2026-02-05 00:00:07.336369 | Job console starting
2026-02-05 00:00:07.361639 | Updating git repos
2026-02-05 00:00:07.551704 | Cloning repos into workspace
2026-02-05 00:00:07.850512 | Restoring repo states
2026-02-05 00:00:07.882288 | Merging changes
2026-02-05 00:00:07.882316 | Checking out repos
2026-02-05 00:00:08.236937 | Preparing playbooks
2026-02-05 00:00:09.242800 | Running Ansible setup
2026-02-05 00:00:16.591448 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-05 00:00:18.638320 |
2026-02-05 00:00:18.638450 | PLAY [Base pre]
2026-02-05 00:00:18.736340 |
2026-02-05 00:00:18.736469 | TASK [Setup log path fact]
2026-02-05 00:00:18.824806 | orchestrator | ok
2026-02-05 00:00:18.923247 |
2026-02-05 00:00:18.923398 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-05 00:00:18.975159 | orchestrator | ok
2026-02-05 00:00:19.004428 |
2026-02-05 00:00:19.045049 | TASK [emit-job-header : Print job information]
2026-02-05 00:00:19.132410 | # Job Information
2026-02-05 00:00:19.132571 | Ansible Version: 2.16.14
2026-02-05 00:00:19.132605 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-05 00:00:19.132639 | Pipeline: periodic-midnight
2026-02-05 00:00:19.132662 | Executor: 521e9411259a
2026-02-05 00:00:19.132682 | Triggered by: https://github.com/osism/testbed
2026-02-05 00:00:19.132704 | Event ID: d2fbb7a4c0254005bfb8ea044578dfa6
2026-02-05 00:00:19.144201 |
2026-02-05 00:00:19.144315 | LOOP [emit-job-header : Print node information]
2026-02-05 00:00:19.565578 | orchestrator | ok:
2026-02-05 00:00:19.565777 | orchestrator | # Node Information
2026-02-05 00:00:19.565807 | orchestrator | Inventory Hostname: orchestrator
2026-02-05 00:00:19.565828 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-05 00:00:19.565846 | orchestrator | Username: zuul-testbed05
2026-02-05 00:00:19.565863 | orchestrator | Distro: Debian 12.13
2026-02-05 00:00:19.565882 | orchestrator | Provider: static-testbed
2026-02-05 00:00:19.565900 | orchestrator | Region:
2026-02-05 00:00:19.565917 | orchestrator | Label: testbed-orchestrator
2026-02-05 00:00:19.565933 | orchestrator | Product Name: OpenStack Nova
2026-02-05 00:00:19.565948 | orchestrator | Interface IP: 81.163.193.140
2026-02-05 00:00:19.584185 |
2026-02-05 00:00:19.584280 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-05 00:00:20.478516 | orchestrator -> localhost | changed
2026-02-05 00:00:20.485219 |
2026-02-05 00:00:20.485304 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-05 00:00:23.825859 | orchestrator -> localhost | changed
2026-02-05 00:00:23.839239 |
2026-02-05 00:00:23.839330 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-05 00:00:24.819867 | orchestrator -> localhost | ok
2026-02-05 00:00:24.825605 |
2026-02-05 00:00:24.825693 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-05 00:00:24.883451 | orchestrator | ok
2026-02-05 00:00:24.936119 | orchestrator | included: /var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-05 00:00:24.971365 |
2026-02-05 00:00:24.971478 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-05 00:00:27.738598 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-05 00:00:27.738806 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/work/1eb1a1bfd15e4e1e93b557e18e3ea3fc_id_rsa
2026-02-05 00:00:27.738872 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/work/1eb1a1bfd15e4e1e93b557e18e3ea3fc_id_rsa.pub
2026-02-05 00:00:27.738901 | orchestrator -> localhost | The key fingerprint is:
2026-02-05 00:00:27.738926 | orchestrator -> localhost | SHA256:DMIX0clejNFfz9GScke3qJuZJC6RFl32Z/j2o7f2FKU zuul-build-sshkey
2026-02-05 00:00:27.738948 | orchestrator -> localhost | The key's randomart image is:
2026-02-05 00:00:27.738979 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-05 00:00:27.739014 | orchestrator -> localhost | | o+.* o o+|
2026-02-05 00:00:27.739036 | orchestrator -> localhost | | . .* * o B.=|
2026-02-05 00:00:27.739057 | orchestrator -> localhost | | o oo o . B Oo|
2026-02-05 00:00:27.739077 | orchestrator -> localhost | | o o+ o +.+|
2026-02-05 00:00:27.739098 | orchestrator -> localhost | | +S. o E+ |
2026-02-05 00:00:27.739124 | orchestrator -> localhost | | . o o = . o|
2026-02-05 00:00:27.739146 | orchestrator -> localhost | | . . = .o|
2026-02-05 00:00:27.739166 | orchestrator -> localhost | | . .+.|
2026-02-05 00:00:27.739186 | orchestrator -> localhost | | .o.+|
2026-02-05 00:00:27.739207 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-05 00:00:27.739259 | orchestrator -> localhost | ok: Runtime: 0:00:01.052537
2026-02-05 00:00:27.748152 |
2026-02-05 00:00:27.748257 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-05 00:00:27.802111 | orchestrator | ok
2026-02-05 00:00:27.826180 | orchestrator | included: /var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-05 00:00:27.871738 |
2026-02-05 00:00:27.871845 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-05 00:00:27.938091 | orchestrator | skipping: Conditional result was False
2026-02-05 00:00:27.977035 |
2026-02-05 00:00:27.977155 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-05 00:00:28.746407 | orchestrator | changed
2026-02-05 00:00:28.757446 |
2026-02-05 00:00:28.757548 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-05 00:00:29.083960 | orchestrator | ok
2026-02-05 00:00:29.094362 |
2026-02-05 00:00:29.096275 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-05 00:00:29.644216 | orchestrator | ok
2026-02-05 00:00:29.661061 |
2026-02-05 00:00:29.661172 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-05 00:00:30.217972 | orchestrator | ok
2026-02-05 00:00:30.227406 |
2026-02-05 00:00:30.227506 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-05 00:00:30.270817 | orchestrator | skipping: Conditional result was False
2026-02-05 00:00:30.277591 |
2026-02-05 00:00:30.277672 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-05 00:00:31.804746 | orchestrator -> localhost | changed
2026-02-05 00:00:31.822443 |
2026-02-05 00:00:31.822542 | TASK [add-build-sshkey : Add back temp key]
2026-02-05 00:00:33.011421 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/work/1eb1a1bfd15e4e1e93b557e18e3ea3fc_id_rsa (zuul-build-sshkey)
2026-02-05 00:00:33.011611 | orchestrator -> localhost | ok: Runtime: 0:00:00.029981
2026-02-05 00:00:33.018105 |
2026-02-05 00:00:33.018187 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-05 00:00:33.773973 | orchestrator | ok
2026-02-05 00:00:33.778779 |
2026-02-05 00:00:33.778890 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-05 00:00:33.840472 | orchestrator | skipping: Conditional result was False
2026-02-05 00:00:33.926042 |
2026-02-05 00:00:33.926141 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-05 00:00:34.769063 | orchestrator | ok
2026-02-05 00:00:34.814762 |
2026-02-05 00:00:34.815303 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-05 00:00:34.888109 | orchestrator | ok
2026-02-05 00:00:34.894015 |
2026-02-05 00:00:34.894100 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-05 00:00:35.542580 | orchestrator -> localhost | ok
2026-02-05 00:00:35.550204 |
2026-02-05 00:00:35.550300 | TASK [validate-host : Collect information about the host]
2026-02-05 00:00:37.926058 | orchestrator | ok
2026-02-05 00:00:37.968523 |
2026-02-05 00:00:37.968651 | TASK [validate-host : Sanitize hostname]
2026-02-05 00:00:38.109187 | orchestrator | ok
2026-02-05 00:00:38.120650 |
2026-02-05 00:00:38.120757 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-05 00:00:39.977291 | orchestrator -> localhost | changed
2026-02-05 00:00:39.983383 |
2026-02-05 00:00:39.983482 | TASK [validate-host : Collect information about zuul worker]
2026-02-05 00:00:40.773970 | orchestrator | ok
2026-02-05 00:00:40.779782 |
2026-02-05 00:00:40.779883 | TASK [validate-host : Write out all zuul information for each host]
2026-02-05 00:00:42.244779 | orchestrator -> localhost | changed
2026-02-05 00:00:42.265355 |
2026-02-05 00:00:42.265459 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-05 00:00:42.564062 | orchestrator | ok
2026-02-05 00:00:42.583545 |
2026-02-05 00:00:42.583650 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-05 00:01:58.910464 | orchestrator | changed:
2026-02-05 00:01:58.911902 | orchestrator | .d..t...... src/
2026-02-05 00:01:58.912044 | orchestrator | .d..t...... src/github.com/
2026-02-05 00:01:58.912079 | orchestrator | .d..t...... src/github.com/osism/
2026-02-05 00:01:58.912105 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-05 00:01:58.912130 | orchestrator | RedHat.yml
2026-02-05 00:01:58.928427 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-05 00:01:58.928445 | orchestrator | RedHat.yml
2026-02-05 00:01:58.928498 | orchestrator | = 1.53.0"...
2026-02-05 00:02:10.928050 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-05 00:02:11.076336 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-05 00:02:11.601482 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-05 00:02:11.670401 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-05 00:02:12.356845 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-05 00:02:12.426858 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-05 00:02:12.928199 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-05 00:02:12.928267 | orchestrator |
2026-02-05 00:02:12.928275 | orchestrator | Providers are signed by their developers.
2026-02-05 00:02:12.928280 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-05 00:02:12.928285 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-05 00:02:12.928291 | orchestrator |
2026-02-05 00:02:12.928295 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-05 00:02:12.928300 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-05 00:02:12.928312 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-05 00:02:12.928316 | orchestrator | you run "tofu init" in the future.
2026-02-05 00:02:12.928568 | orchestrator |
2026-02-05 00:02:12.928584 | orchestrator | OpenTofu has been successfully initialized!
2026-02-05 00:02:12.928612 | orchestrator |
2026-02-05 00:02:12.928618 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-05 00:02:12.928622 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-05 00:02:12.928627 | orchestrator | should now work.
2026-02-05 00:02:12.928630 | orchestrator |
2026-02-05 00:02:12.928634 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-05 00:02:12.928641 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-05 00:02:12.928646 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-05 00:02:13.115922 | orchestrator | Created and switched to workspace "ci"!
2026-02-05 00:02:13.115959 | orchestrator |
2026-02-05 00:02:13.115965 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-05 00:02:13.115970 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-05 00:02:13.115987 | orchestrator | for this configuration.
2026-02-05 00:02:13.277990 | orchestrator | ci.auto.tfvars
2026-02-05 00:02:13.586087 | orchestrator | default_custom.tf
2026-02-05 00:02:14.898074 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-05 00:02:15.438420 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-05 00:02:15.664570 | orchestrator |
2026-02-05 00:02:15.664637 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-05 00:02:15.664651 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-05 00:02:15.665721 | orchestrator | + create
2026-02-05 00:02:15.665865 | orchestrator | <= read (data resources)
2026-02-05 00:02:15.665886 | orchestrator |
2026-02-05 00:02:15.665893 | orchestrator | OpenTofu will perform the following actions:
2026-02-05 00:02:15.666955 | orchestrator |
2026-02-05 00:02:15.667001 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-05 00:02:15.667011 | orchestrator | # (config refers to values not yet known)
2026-02-05 00:02:15.667020 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-05 00:02:15.667029 | orchestrator | + checksum = (known after apply)
2026-02-05 00:02:15.667037 | orchestrator | + created_at = (known after apply)
2026-02-05 00:02:15.667044 | orchestrator | + file = (known after apply)
2026-02-05 00:02:15.667052 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.667074 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.667080 | orchestrator | + min_disk_gb = (known after apply)
2026-02-05 00:02:15.667084 | orchestrator | + min_ram_mb = (known after apply)
2026-02-05 00:02:15.667089 | orchestrator | + most_recent = true
2026-02-05 00:02:15.667094 | orchestrator | + name = (known after apply)
2026-02-05 00:02:15.667099 | orchestrator | + protected = (known after apply)
2026-02-05 00:02:15.667104 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.667111 | orchestrator | + schema = (known after apply)
2026-02-05 00:02:15.667116 | orchestrator | + size_bytes = (known after apply)
2026-02-05 00:02:15.667121 | orchestrator | + tags = (known after apply)
2026-02-05 00:02:15.667125 | orchestrator | + updated_at = (known after apply)
2026-02-05 00:02:15.667130 | orchestrator | }
2026-02-05 00:02:15.667242 | orchestrator |
2026-02-05 00:02:15.667257 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-05 00:02:15.667262 | orchestrator | # (config refers to values not yet known)
2026-02-05 00:02:15.667267 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-05 00:02:15.667272 | orchestrator | + checksum = (known after apply)
2026-02-05 00:02:15.667277 | orchestrator | + created_at = (known after apply)
2026-02-05 00:02:15.667281 | orchestrator | + file = (known after apply)
2026-02-05 00:02:15.667286 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.667290 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.667295 | orchestrator | + min_disk_gb = (known after apply)
2026-02-05 00:02:15.667299 | orchestrator | + min_ram_mb = (known after apply)
2026-02-05 00:02:15.667304 | orchestrator | + most_recent = true
2026-02-05 00:02:15.667309 | orchestrator | + name = (known after apply)
2026-02-05 00:02:15.667313 | orchestrator | + protected = (known after apply)
2026-02-05 00:02:15.667318 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.667322 | orchestrator | + schema = (known after apply)
2026-02-05 00:02:15.667327 | orchestrator | + size_bytes = (known after apply)
2026-02-05 00:02:15.667332 | orchestrator | + tags = (known after apply)
2026-02-05 00:02:15.667336 | orchestrator | + updated_at = (known after apply)
2026-02-05 00:02:15.667341 | orchestrator | }
2026-02-05 00:02:15.667476 | orchestrator |
2026-02-05 00:02:15.667492 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-05 00:02:15.667498 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-05 00:02:15.667502 | orchestrator | + content = (known after apply)
2026-02-05 00:02:15.667507 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-05 00:02:15.667512 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-05 00:02:15.667517 | orchestrator | + content_md5 = (known after apply)
2026-02-05 00:02:15.667521 | orchestrator | + content_sha1 = (known after apply)
2026-02-05 00:02:15.667526 | orchestrator | + content_sha256 = (known after apply)
2026-02-05 00:02:15.667530 | orchestrator | + content_sha512 = (known after apply)
2026-02-05 00:02:15.667535 | orchestrator | + directory_permission = "0777"
2026-02-05 00:02:15.667539 | orchestrator | + file_permission = "0644"
2026-02-05 00:02:15.667544 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-05 00:02:15.667548 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.667553 | orchestrator | }
2026-02-05 00:02:15.667696 | orchestrator |
2026-02-05 00:02:15.667712 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-05 00:02:15.667717 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-05 00:02:15.667721 | orchestrator | + content = (known after apply)
2026-02-05 00:02:15.667726 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-05 00:02:15.667730 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-05 00:02:15.667734 | orchestrator | + content_md5 = (known after apply)
2026-02-05 00:02:15.667738 | orchestrator | + content_sha1 = (known after apply)
2026-02-05 00:02:15.667742 | orchestrator | + content_sha256 = (known after apply)
2026-02-05 00:02:15.667746 | orchestrator | + content_sha512 = (known after apply)
2026-02-05 00:02:15.667750 | orchestrator | + directory_permission = "0777"
2026-02-05 00:02:15.667754 | orchestrator | + file_permission = "0644"
2026-02-05 00:02:15.667767 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-05 00:02:15.667771 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.667775 | orchestrator | }
2026-02-05 00:02:15.667849 | orchestrator |
2026-02-05 00:02:15.667869 | orchestrator | # local_file.inventory will be created
2026-02-05 00:02:15.667875 | orchestrator | + resource "local_file" "inventory" {
2026-02-05 00:02:15.667879 | orchestrator | + content = (known after apply)
2026-02-05 00:02:15.667883 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-05 00:02:15.667887 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-05 00:02:15.667891 | orchestrator | + content_md5 = (known after apply)
2026-02-05 00:02:15.667896 | orchestrator | + content_sha1 = (known after apply)
2026-02-05 00:02:15.667900 | orchestrator | + content_sha256 = (known after apply)
2026-02-05 00:02:15.667904 | orchestrator | + content_sha512 = (known after apply)
2026-02-05 00:02:15.667908 | orchestrator | + directory_permission = "0777"
2026-02-05 00:02:15.667912 | orchestrator | + file_permission = "0644"
2026-02-05 00:02:15.667917 | orchestrator | + filename = "inventory.ci"
2026-02-05 00:02:15.667921 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.667925 | orchestrator | }
2026-02-05 00:02:15.668002 | orchestrator |
2026-02-05 00:02:15.668019 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-05 00:02:15.668027 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-05 00:02:15.668034 | orchestrator | + content = (sensitive value)
2026-02-05 00:02:15.668040 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-05 00:02:15.668047 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-05 00:02:15.668053 | orchestrator | + content_md5 = (known after apply)
2026-02-05 00:02:15.668060 | orchestrator | + content_sha1 = (known after apply)
2026-02-05 00:02:15.668066 | orchestrator | + content_sha256 = (known after apply)
2026-02-05 00:02:15.668073 | orchestrator | + content_sha512 = (known after apply)
2026-02-05 00:02:15.668079 | orchestrator | + directory_permission = "0700"
2026-02-05 00:02:15.668086 | orchestrator | + file_permission = "0600"
2026-02-05 00:02:15.668092 | orchestrator | + filename = ".id_rsa.ci"
2026-02-05 00:02:15.668099 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.668105 | orchestrator | }
2026-02-05 00:02:15.668144 | orchestrator |
2026-02-05 00:02:15.668161 | orchestrator | # null_resource.node_semaphore will be created
2026-02-05 00:02:15.668170 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-05 00:02:15.668176 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.668184 | orchestrator | }
2026-02-05 00:02:15.668262 | orchestrator |
2026-02-05 00:02:15.668274 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-05 00:02:15.668280 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-05 00:02:15.668284 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.668288 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.668293 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.668297 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.668301 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.668305 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-05 00:02:15.668309 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.668313 | orchestrator | + size = 80
2026-02-05 00:02:15.668317 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.668322 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.668326 | orchestrator | }
2026-02-05 00:02:15.668433 | orchestrator |
2026-02-05 00:02:15.668449 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-05 00:02:15.668454 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:15.668458 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.668463 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.668467 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.668478 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.668483 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.668487 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-05 00:02:15.668491 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.668495 | orchestrator | + size = 80
2026-02-05 00:02:15.668500 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.668504 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.668508 | orchestrator | }
2026-02-05 00:02:15.668576 | orchestrator |
2026-02-05 00:02:15.668588 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-05 00:02:15.668593 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:15.668597 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.668601 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.668606 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.668610 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.668614 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.668618 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-05 00:02:15.668622 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.668626 | orchestrator | + size = 80
2026-02-05 00:02:15.668631 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.668635 | orchestrator | }
2026-02-05 00:02:15.668812 | orchestrator |
2026-02-05 00:02:15.668829 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-05 00:02:15.668835 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:15.668849 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.668853 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.668858 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.668862 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.668866 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.668870 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-05 00:02:15.668875 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.668880 | orchestrator | + size = 80
2026-02-05 00:02:15.668885 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.668911 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.668916 | orchestrator | }
2026-02-05 00:02:15.668986 | orchestrator |
2026-02-05 00:02:15.668998 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-05 00:02:15.669003 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:15.669008 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.669012 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.669016 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.669020 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.669024 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.669035 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-05 00:02:15.669244 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.669255 | orchestrator | + size = 80
2026-02-05 00:02:15.669259 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.669264 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.669268 | orchestrator | }
2026-02-05 00:02:15.669412 | orchestrator |
2026-02-05 00:02:15.669429 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-05 00:02:15.669434 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:15.669439 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.669443 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.669447 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.669462 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.669466 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.669470 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-05 00:02:15.669474 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.669479 | orchestrator | + size = 80
2026-02-05 00:02:15.669483 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.669487 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.669491 | orchestrator | }
2026-02-05 00:02:15.669565 | orchestrator |
2026-02-05 00:02:15.669578 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-05 00:02:15.669583 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:15.669587 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.669591 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.669595 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.669599 | orchestrator | + image_id = (known after apply)
2026-02-05 00:02:15.669604 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.669608 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-05 00:02:15.669613 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.669617 | orchestrator | + size = 80
2026-02-05 00:02:15.669621 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.669626 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.669630 | orchestrator | }
2026-02-05 00:02:15.669699 | orchestrator |
2026-02-05 00:02:15.669711 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-05 00:02:15.673598 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.673621 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.673626 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.673630 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.673634 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.673639 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-05 00:02:15.673643 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.673647 | orchestrator | + size = 20
2026-02-05 00:02:15.673651 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.673655 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.673659 | orchestrator | }
2026-02-05 00:02:15.673779 | orchestrator |
2026-02-05 00:02:15.673794 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-05 00:02:15.673798 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.673803 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.673806 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.673810 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.673814 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.673818 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-05 00:02:15.673822 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.673826 | orchestrator | + size = 20
2026-02-05 00:02:15.673830 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.673834 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.673838 | orchestrator | }
2026-02-05 00:02:15.673904 | orchestrator |
2026-02-05 00:02:15.673917 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-05 00:02:15.673922 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.673926 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.673935 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.673939 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.673943 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.673947 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-05 00:02:15.673951 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.673965 | orchestrator | + size = 20
2026-02-05 00:02:15.673969 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.673973 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.673977 | orchestrator | }
2026-02-05 00:02:15.674085 | orchestrator |
2026-02-05 00:02:15.674098 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-05 00:02:15.674103 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.674107 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.674110 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.674114 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.674118 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.674122 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-05 00:02:15.674126 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.674130 | orchestrator | + size = 20
2026-02-05 00:02:15.674133 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.674137 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.674141 | orchestrator | }
2026-02-05 00:02:15.674256 | orchestrator |
2026-02-05 00:02:15.674269 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-05 00:02:15.674274 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.674278 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.674281 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.674285 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.674289 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.674293 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-05 00:02:15.674297 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.674310 | orchestrator | + size = 20
2026-02-05 00:02:15.674314 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.674317 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.674321 | orchestrator | }
2026-02-05 00:02:15.674460 | orchestrator |
2026-02-05 00:02:15.674473 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-05 00:02:15.674478 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.674481 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.674485 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.674523 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.674528 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.674532 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-05 00:02:15.674536 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.674540 | orchestrator | + size = 20
2026-02-05 00:02:15.674544 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.674548 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.674552 | orchestrator | }
2026-02-05 00:02:15.674617 | orchestrator |
2026-02-05 00:02:15.674629 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-05 00:02:15.674633 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.674637 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.674641 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.674645 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.674649 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.674652 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-05 00:02:15.674656 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.674660 | orchestrator | + size = 20
2026-02-05 00:02:15.674664 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.674668 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.674671 | orchestrator | }
2026-02-05 00:02:15.674734 | orchestrator |
2026-02-05 00:02:15.674746 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-05 00:02:15.674750 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:15.674761 | orchestrator | + attachment = (known after apply)
2026-02-05 00:02:15.674765 | orchestrator | + availability_zone = "nova"
2026-02-05 00:02:15.674769 | orchestrator | + id = (known after apply)
2026-02-05 00:02:15.674773 | orchestrator | + metadata = (known after apply)
2026-02-05 00:02:15.674777 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-05 00:02:15.674781 | orchestrator | + region = (known after apply)
2026-02-05 00:02:15.674784 | orchestrator | + size = 20
2026-02-05 00:02:15.674788 | orchestrator | + volume_retype_policy = "never"
2026-02-05 00:02:15.674792 | orchestrator | + volume_type = "ssd"
2026-02-05 00:02:15.674796 | orchestrator | }
2026-02-05 00:02:15.674855 | orchestrator |
2026-02-05 00:02:15.674867 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-05 00:02:15.674871 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-05 00:02:15.674875 | orchestrator | + attachment = (known after apply) 2026-02-05 00:02:15.674879 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.674883 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.674886 | orchestrator | + metadata = (known after apply) 2026-02-05 00:02:15.674890 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-05 00:02:15.674894 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.674898 | orchestrator | + size = 20 2026-02-05 00:02:15.674902 | orchestrator | + volume_retype_policy = "never" 2026-02-05 00:02:15.674905 | orchestrator | + volume_type = "ssd" 2026-02-05 00:02:15.674909 | orchestrator | } 2026-02-05 00:02:15.675120 | orchestrator | 2026-02-05 00:02:15.675133 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-05 00:02:15.675138 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-05 00:02:15.675142 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.675145 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.675149 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.675153 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.675157 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.675161 | orchestrator | + config_drive = true 2026-02-05 00:02:15.675164 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.675168 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.675172 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-05 00:02:15.675176 | orchestrator | + force_delete = false 2026-02-05 00:02:15.675179 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.675183 | 
orchestrator | + id = (known after apply) 2026-02-05 00:02:15.675187 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.675191 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.675195 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.675198 | orchestrator | + name = "testbed-manager" 2026-02-05 00:02:15.675202 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.675206 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.675210 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.675213 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:15.675217 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.675221 | orchestrator | + user_data = (sensitive value) 2026-02-05 00:02:15.675225 | orchestrator | 2026-02-05 00:02:15.675229 | orchestrator | + block_device { 2026-02-05 00:02:15.675233 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.675237 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:15.675245 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.675249 | orchestrator | + multiattach = false 2026-02-05 00:02:15.675253 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.675256 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.675396 | orchestrator | } 2026-02-05 00:02:15.675402 | orchestrator | 2026-02-05 00:02:15.675406 | orchestrator | + network { 2026-02-05 00:02:15.675410 | orchestrator | + access_network = false 2026-02-05 00:02:15.675413 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.675417 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.675421 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.675425 | orchestrator | + name = (known after apply) 2026-02-05 00:02:15.675429 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.675433 | orchestrator | + uuid = (known after apply) 2026-02-05 
00:02:15.675436 | orchestrator | } 2026-02-05 00:02:15.675440 | orchestrator | } 2026-02-05 00:02:15.675700 | orchestrator | 2026-02-05 00:02:15.675716 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-05 00:02:15.675721 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:15.675725 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.675729 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.675733 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.675736 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.675740 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.675744 | orchestrator | + config_drive = true 2026-02-05 00:02:15.675748 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.675752 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.675756 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:15.675759 | orchestrator | + force_delete = false 2026-02-05 00:02:15.675763 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.675767 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.675771 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.675775 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.675779 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.675782 | orchestrator | + name = "testbed-node-0" 2026-02-05 00:02:15.675786 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.675790 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.675794 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.675797 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:15.675801 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.675805 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:15.675809 | orchestrator | 2026-02-05 00:02:15.675813 | orchestrator | + block_device { 2026-02-05 00:02:15.675817 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.675821 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:15.675824 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.675828 | orchestrator | + multiattach = false 2026-02-05 00:02:15.675832 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.675836 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.675840 | orchestrator | } 2026-02-05 00:02:15.675843 | orchestrator | 2026-02-05 00:02:15.675847 | orchestrator | + network { 2026-02-05 00:02:15.675851 | orchestrator | + access_network = false 2026-02-05 00:02:15.675855 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.675859 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.675863 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.675867 | orchestrator | + name = (known after apply) 2026-02-05 00:02:15.675870 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.675874 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.675878 | orchestrator | } 2026-02-05 00:02:15.675882 | orchestrator | } 2026-02-05 00:02:15.676066 | orchestrator | 2026-02-05 00:02:15.676077 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-05 00:02:15.676082 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:15.676086 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.676096 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.676100 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.676104 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.676108 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.676111 
| orchestrator | + config_drive = true 2026-02-05 00:02:15.676115 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.676119 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.676123 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:15.676127 | orchestrator | + force_delete = false 2026-02-05 00:02:15.676130 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.676134 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.676138 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.676142 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.676145 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.676149 | orchestrator | + name = "testbed-node-1" 2026-02-05 00:02:15.676153 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.676157 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.676161 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.676165 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:15.676169 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.676172 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:15.676176 | orchestrator | 2026-02-05 00:02:15.676180 | orchestrator | + block_device { 2026-02-05 00:02:15.676184 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.676188 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:15.676191 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.676195 | orchestrator | + multiattach = false 2026-02-05 00:02:15.676199 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.676203 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.676207 | orchestrator | } 2026-02-05 00:02:15.676210 | orchestrator | 2026-02-05 00:02:15.676214 | orchestrator | + network { 2026-02-05 00:02:15.676218 | orchestrator | + access_network = 
false 2026-02-05 00:02:15.676222 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.676226 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.676230 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.676233 | orchestrator | + name = (known after apply) 2026-02-05 00:02:15.676237 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.676241 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.676245 | orchestrator | } 2026-02-05 00:02:15.676249 | orchestrator | } 2026-02-05 00:02:15.676552 | orchestrator | 2026-02-05 00:02:15.676578 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-05 00:02:15.676583 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:15.676587 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.676591 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.676596 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.676600 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.676611 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.676615 | orchestrator | + config_drive = true 2026-02-05 00:02:15.676619 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.676623 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.676627 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:15.676630 | orchestrator | + force_delete = false 2026-02-05 00:02:15.676634 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.676638 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.676641 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.676651 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.676655 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.676658 | orchestrator | + name = 
"testbed-node-2" 2026-02-05 00:02:15.676662 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.676666 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.676670 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.676673 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:15.676677 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.676681 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:15.676685 | orchestrator | 2026-02-05 00:02:15.676688 | orchestrator | + block_device { 2026-02-05 00:02:15.676692 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.676696 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:15.676700 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.676704 | orchestrator | + multiattach = false 2026-02-05 00:02:15.676707 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.676711 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.676715 | orchestrator | } 2026-02-05 00:02:15.676719 | orchestrator | 2026-02-05 00:02:15.676723 | orchestrator | + network { 2026-02-05 00:02:15.676726 | orchestrator | + access_network = false 2026-02-05 00:02:15.676730 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.676734 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.676738 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.676741 | orchestrator | + name = (known after apply) 2026-02-05 00:02:15.676745 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.676749 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.676753 | orchestrator | } 2026-02-05 00:02:15.676757 | orchestrator | } 2026-02-05 00:02:15.677008 | orchestrator | 2026-02-05 00:02:15.677021 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-05 00:02:15.677025 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:15.677029 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.677033 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.677037 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.677041 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.677045 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.677048 | orchestrator | + config_drive = true 2026-02-05 00:02:15.677052 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.677056 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.677060 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:15.677064 | orchestrator | + force_delete = false 2026-02-05 00:02:15.677067 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.677071 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.677075 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.677079 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.677083 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.677087 | orchestrator | + name = "testbed-node-3" 2026-02-05 00:02:15.677090 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.677094 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.677098 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.677102 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:15.677105 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.677109 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:15.677113 | orchestrator | 2026-02-05 00:02:15.677117 | orchestrator | + block_device { 2026-02-05 00:02:15.677124 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.677128 | orchestrator | + delete_on_termination = false 2026-02-05 
00:02:15.677131 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.677139 | orchestrator | + multiattach = false 2026-02-05 00:02:15.677143 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.677147 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.677151 | orchestrator | } 2026-02-05 00:02:15.677155 | orchestrator | 2026-02-05 00:02:15.677158 | orchestrator | + network { 2026-02-05 00:02:15.677172 | orchestrator | + access_network = false 2026-02-05 00:02:15.677176 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.677180 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.677184 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.677187 | orchestrator | + name = (known after apply) 2026-02-05 00:02:15.677191 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.677195 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.677199 | orchestrator | } 2026-02-05 00:02:15.677202 | orchestrator | } 2026-02-05 00:02:15.677415 | orchestrator | 2026-02-05 00:02:15.677428 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-05 00:02:15.677433 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:15.677437 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.677441 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.677445 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.677449 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.677453 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.677456 | orchestrator | + config_drive = true 2026-02-05 00:02:15.677460 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.677464 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.677468 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:15.677472 | 
orchestrator | + force_delete = false 2026-02-05 00:02:15.677476 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.677479 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.677483 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.677487 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.677491 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.677494 | orchestrator | + name = "testbed-node-4" 2026-02-05 00:02:15.677498 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.677502 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.677506 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.677510 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:15.677513 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.677517 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:15.677521 | orchestrator | 2026-02-05 00:02:15.677525 | orchestrator | + block_device { 2026-02-05 00:02:15.677529 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.677532 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:15.677536 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.677540 | orchestrator | + multiattach = false 2026-02-05 00:02:15.677544 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.677547 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.677551 | orchestrator | } 2026-02-05 00:02:15.677555 | orchestrator | 2026-02-05 00:02:15.677559 | orchestrator | + network { 2026-02-05 00:02:15.677563 | orchestrator | + access_network = false 2026-02-05 00:02:15.677566 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.677570 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.677574 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.677578 | orchestrator | + name = (known 
after apply) 2026-02-05 00:02:15.677581 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.677585 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.677589 | orchestrator | } 2026-02-05 00:02:15.677593 | orchestrator | } 2026-02-05 00:02:15.677779 | orchestrator | 2026-02-05 00:02:15.677792 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-05 00:02:15.677797 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:15.677800 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:15.677804 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:15.677808 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:15.677812 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:15.677816 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:15.677820 | orchestrator | + config_drive = true 2026-02-05 00:02:15.677823 | orchestrator | + created = (known after apply) 2026-02-05 00:02:15.677827 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:15.677831 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:15.677835 | orchestrator | + force_delete = false 2026-02-05 00:02:15.677842 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:15.677846 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.677850 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:15.677854 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:15.677857 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:15.677861 | orchestrator | + name = "testbed-node-5" 2026-02-05 00:02:15.677865 | orchestrator | + power_state = "active" 2026-02-05 00:02:15.677869 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.677873 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:15.677877 | orchestrator | + 
stop_before_destroy = false 2026-02-05 00:02:15.677880 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:15.677884 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:15.677888 | orchestrator | 2026-02-05 00:02:15.677892 | orchestrator | + block_device { 2026-02-05 00:02:15.677896 | orchestrator | + boot_index = 0 2026-02-05 00:02:15.677899 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:15.677903 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:15.677907 | orchestrator | + multiattach = false 2026-02-05 00:02:15.677911 | orchestrator | + source_type = "volume" 2026-02-05 00:02:15.677915 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.677918 | orchestrator | } 2026-02-05 00:02:15.677922 | orchestrator | 2026-02-05 00:02:15.677926 | orchestrator | + network { 2026-02-05 00:02:15.677930 | orchestrator | + access_network = false 2026-02-05 00:02:15.677934 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:15.677937 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:15.677941 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:15.677945 | orchestrator | + name = (known after apply) 2026-02-05 00:02:15.677949 | orchestrator | + port = (known after apply) 2026-02-05 00:02:15.677953 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:15.677956 | orchestrator | } 2026-02-05 00:02:15.677960 | orchestrator | } 2026-02-05 00:02:15.678005 | orchestrator | 2026-02-05 00:02:15.678039 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-05 00:02:15.678044 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-05 00:02:15.678048 | orchestrator | + fingerprint = (known after apply) 2026-02-05 00:02:15.678052 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.678056 | orchestrator | + name = "testbed" 2026-02-05 00:02:15.678059 | orchestrator | + private_key = 
(sensitive value) 2026-02-05 00:02:15.678063 | orchestrator | + public_key = (known after apply) 2026-02-05 00:02:15.678067 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.678071 | orchestrator | + user_id = (known after apply) 2026-02-05 00:02:15.678075 | orchestrator | } 2026-02-05 00:02:15.678116 | orchestrator | 2026-02-05 00:02:15.678127 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-05 00:02:15.678131 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-05 00:02:15.678140 | orchestrator | + device = (known after apply) 2026-02-05 00:02:15.678144 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.678148 | orchestrator | + instance_id = (known after apply) 2026-02-05 00:02:15.678151 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.678155 | orchestrator | + volume_id = (known after apply) 2026-02-05 00:02:15.678159 | orchestrator | } 2026-02-05 00:02:15.678197 | orchestrator | 2026-02-05 00:02:15.678208 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-05 00:02:15.678212 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-05 00:02:15.678216 | orchestrator | + device = (known after apply) 2026-02-05 00:02:15.678220 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.678224 | orchestrator | + instance_id = (known after apply) 2026-02-05 00:02:15.678228 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.678231 | orchestrator | + volume_id = (known after apply) 2026-02-05 00:02:15.678235 | orchestrator | } 2026-02-05 00:02:15.678274 | orchestrator | 2026-02-05 00:02:15.678284 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-05 00:02:15.678289 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-02-05 00:02:15.678293 | orchestrator |
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-05 00:02:15.683188 | orchestrator | + network_id = (known after apply) 2026-02-05 00:02:15.683192 | orchestrator | + no_gateway = false 2026-02-05 00:02:15.683195 | orchestrator | + region = (known after apply) 2026-02-05 00:02:15.683199 | orchestrator | + service_types = (known after apply) 2026-02-05 00:02:15.683207 | orchestrator | + tenant_id = (known after apply) 2026-02-05 00:02:15.683211 | orchestrator | 2026-02-05 00:02:15.683215 | orchestrator | + allocation_pool { 2026-02-05 00:02:15.683219 | orchestrator | + end = "192.168.31.250" 2026-02-05 00:02:15.683223 | orchestrator | + start = "192.168.31.200" 2026-02-05 00:02:15.683227 | orchestrator | } 2026-02-05 00:02:15.683230 | orchestrator | } 2026-02-05 00:02:15.683262 | orchestrator | 2026-02-05 00:02:15.683273 | orchestrator | # terraform_data.image will be created 2026-02-05 00:02:15.683278 | orchestrator | + resource "terraform_data" "image" { 2026-02-05 00:02:15.683282 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.683286 | orchestrator | + input = "Ubuntu 24.04" 2026-02-05 00:02:15.683289 | orchestrator | + output = (known after apply) 2026-02-05 00:02:15.683293 | orchestrator | } 2026-02-05 00:02:15.683324 | orchestrator | 2026-02-05 00:02:15.683335 | orchestrator | # terraform_data.image_node will be created 2026-02-05 00:02:15.683339 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-05 00:02:15.683343 | orchestrator | + id = (known after apply) 2026-02-05 00:02:15.683347 | orchestrator | + input = "Ubuntu 24.04" 2026-02-05 00:02:15.683351 | orchestrator | + output = (known after apply) 2026-02-05 00:02:15.683355 | orchestrator | } 2026-02-05 00:02:15.683386 | orchestrator | 2026-02-05 00:02:15.683392 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-05 00:02:15.683403 | orchestrator | 2026-02-05 00:02:15.683408 | orchestrator | Changes to Outputs: 2026-02-05 00:02:15.683419 | orchestrator | + manager_address = (sensitive value) 2026-02-05 00:02:15.683423 | orchestrator | + private_key = (sensitive value) 2026-02-05 00:02:15.886109 | orchestrator | terraform_data.image: Creating... 2026-02-05 00:02:16.360656 | orchestrator | terraform_data.image: Creation complete after 0s [id=2136e58a-63c9-9641-0280-e4d73622af07] 2026-02-05 00:02:16.361203 | orchestrator | terraform_data.image_node: Creating... 2026-02-05 00:02:16.361609 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=a0343c7d-4f94-080e-81cd-6cf5ff6b227d] 2026-02-05 00:02:16.382235 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-05 00:02:16.383607 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-05 00:02:16.394319 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-05 00:02:16.394527 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-05 00:02:16.396262 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-02-05 00:02:16.396589 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-05 00:02:16.398132 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-05 00:02:16.400083 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-05 00:02:16.402068 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-05 00:02:16.403267 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-05 00:02:16.871028 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-05 00:02:16.877628 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2026-02-05 00:02:16.890697 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-05 00:02:16.896794 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-02-05 00:02:16.931849 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-02-05 00:02:16.938486 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-05 00:02:18.194122 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=72f65c26-1476-423c-8206-fb1b80d81518] 2026-02-05 00:02:18.206612 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-02-05 00:02:20.107137 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=c8222ed3-0da2-4bb4-b170-21b6f36ecb8d] 2026-02-05 00:02:20.479273 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-02-05 00:02:20.479400 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=33f37d33-b22b-44c3-8624-6074b4bf08c3] 2026-02-05 00:02:20.479428 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-02-05 00:02:20.479486 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=9acd2af8-1818-4377-bd1d-628102e352cb] 2026-02-05 00:02:20.479508 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=7f67b6e9-f99c-4354-902d-31e3a3988722] 2026-02-05 00:02:20.479529 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=3c61f4596590029cdec07baa6f667a61c89de812] 2026-02-05 00:02:20.479552 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-02-05 00:02:20.479564 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-02-05 00:02:20.479574 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-02-05 00:02:20.479585 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=a3293e5b-f1f9-462e-9781-4b1b679aef30] 2026-02-05 00:02:20.479596 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-02-05 00:02:20.479610 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=d601120f-cbb3-4953-a30b-917ccea713c0] 2026-02-05 00:02:20.479626 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-02-05 00:02:20.479644 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=b7f472c8-b527-47c9-ac56-62f6f3e84fbf] 2026-02-05 00:02:20.479662 | orchestrator | local_file.id_rsa_pub: Creating... 2026-02-05 00:02:20.479680 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=0f4e2151-cc71-4085-93f0-18395b8a78d9] 2026-02-05 00:02:20.479698 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=8b3d09cc431caaba8f9d67894d5883561d7dc45c] 2026-02-05 00:02:20.479717 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-02-05 00:02:20.479736 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=e6da1746-b16d-4279-a6c0-a95c954f705d] 2026-02-05 00:02:21.186106 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=140d076c-25ba-4c7f-981e-4ccc07f56e89] 2026-02-05 00:02:21.195191 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-02-05 00:02:21.744128 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=774239dd-62d4-45b8-8e64-6e1897c586b6] 2026-02-05 00:02:23.557802 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=ae798e57-6294-4077-9df2-d289d5b267fa] 2026-02-05 00:02:23.576179 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=ae8576b0-3518-4bda-8316-c370e1678e8f] 2026-02-05 00:02:23.601786 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=2e93b330-8072-4b50-a022-e1b5f3f4b47f] 2026-02-05 00:02:23.643190 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=249af197-fbc4-4070-877c-ae28488f0fb3] 2026-02-05 00:02:23.700788 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=d26fcf07-a835-4e26-a700-2c8fd3601c19] 2026-02-05 00:02:23.704597 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f] 2026-02-05 00:02:24.311432 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=7c2770b7-3cb3-4f09-afc7-7064a79d6e24] 2026-02-05 00:02:24.328494 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-02-05 00:02:24.639451 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-02-05 00:02:24.639609 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-02-05 00:02:24.639621 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=a5a79b8e-e088-444d-951c-d5930bca3c4e] 2026-02-05 00:02:24.639627 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-02-05 00:02:24.639633 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-02-05 00:02:24.639679 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-02-05 00:02:24.639718 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-02-05 00:02:24.639723 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-02-05 00:02:24.639729 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-02-05 00:02:24.705629 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=b6d9e075-b2b5-4619-81c9-5607434bd4e3] 2026-02-05 00:02:24.712513 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-02-05 00:02:24.713541 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-02-05 00:02:24.718996 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-02-05 00:02:24.795953 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=f8f6d511-e9ad-494a-97b3-900123fb5adc] 2026-02-05 00:02:24.808598 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-02-05 00:02:24.951877 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=87e12ba9-97ae-4462-8b13-9afd672c811f] 2026-02-05 00:02:24.970832 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 
2026-02-05 00:02:25.112495 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=2dac3f9e-be6c-4bfd-b8b1-834f048464f0] 2026-02-05 00:02:25.127404 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-02-05 00:02:25.292912 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=3c01525f-2b2a-418c-a445-e1100caa48da] 2026-02-05 00:02:25.310818 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-02-05 00:02:25.379933 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=71cb8696-26aa-4165-b421-fbfb6c641aa5] 2026-02-05 00:02:25.389020 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-02-05 00:02:25.422661 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=78a08280-fb73-47a4-a638-37f46044aa12] 2026-02-05 00:02:25.427261 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=314e84d2-8ffb-4501-ab1f-8c59782dfb75] 2026-02-05 00:02:25.428849 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-02-05 00:02:25.434501 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
2026-02-05 00:02:25.481105 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=ed0f3b4a-3f13-4341-895d-615ef05875c0] 2026-02-05 00:02:25.570961 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=1415758e-b897-43c6-8ed2-f4ed2dcf3aaf] 2026-02-05 00:02:25.614422 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=c03d6528-b440-4839-819e-064cac0be250] 2026-02-05 00:02:25.731909 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=7036252b-423d-4b35-99a9-8469c5be9c41] 2026-02-05 00:02:25.916090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=3b47349e-6467-4dbe-9c3b-5a06b8f3db7c] 2026-02-05 00:02:26.036209 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=6d841e05-685b-43e9-aae2-4fb01299eb64] 2026-02-05 00:02:26.116825 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=bb78befd-5963-451b-bf2d-b72896d1321d] 2026-02-05 00:02:26.233725 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=d4af560e-a880-402d-8330-8a7f2bfc24f6] 2026-02-05 00:02:26.563388 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=00e69b7c-72cf-4579-a893-ab8e44f997de] 2026-02-05 00:02:30.023839 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=e6e40aad-7533-4104-aa34-ceec79f5a0da] 2026-02-05 00:02:30.045577 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-02-05 00:02:30.057280 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 
2026-02-05 00:02:30.059341 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-02-05 00:02:30.066465 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-02-05 00:02:30.074142 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-02-05 00:02:30.075175 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-02-05 00:02:30.075227 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-02-05 00:02:32.863214 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=cefec494-e66f-46f3-ae7c-3817a8814d1b] 2026-02-05 00:02:32.873518 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-02-05 00:02:32.875844 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-02-05 00:02:32.877190 | orchestrator | local_file.inventory: Creating... 2026-02-05 00:02:32.880998 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=143adafd39b228ef5223804fb86e2044a9b38df9] 2026-02-05 00:02:32.881052 | orchestrator | local_file.inventory: Creation complete after 0s [id=2b8b3ff04769e398c7152c45ccbfee2cd6f726cf] 2026-02-05 00:02:34.099547 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=cefec494-e66f-46f3-ae7c-3817a8814d1b] 2026-02-05 00:02:40.062423 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-02-05 00:02:40.063873 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-02-05 00:02:40.069628 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-02-05 00:02:40.078909 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2026-02-05 00:02:40.078990 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-02-05 00:02:40.082163 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-02-05 00:02:50.070400 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-02-05 00:02:50.070505 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-02-05 00:02:50.070516 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-02-05 00:02:50.079777 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-02-05 00:02:50.079877 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-02-05 00:02:50.083433 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-02-05 00:02:50.857258 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=1637eff1-9b6d-4bda-a82a-2fbcdffded36] 2026-02-05 00:02:50.987657 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=2202ad7d-3f7c-4ad5-9c0b-8cd04a453e3f] 2026-02-05 00:03:00.079473 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-02-05 00:03:00.079619 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-02-05 00:03:00.080693 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-02-05 00:03:00.084018 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2026-02-05 00:03:01.020531 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=8a12deba-7657-4707-a79d-4c9ea247a1a5] 2026-02-05 00:03:01.229747 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=a9a5e3d0-2b48-450b-b7ec-f03ce23723c8] 2026-02-05 00:03:01.338458 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=ee54640e-21e7-42a9-9439-4f102eaeaf42] 2026-02-05 00:03:01.539898 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 32s [id=48a471bd-ad2b-4c33-8f82-4464621d06c6] 2026-02-05 00:03:01.567275 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-02-05 00:03:01.570515 | orchestrator | null_resource.node_semaphore: Creating... 2026-02-05 00:03:01.574445 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1717305440899909435] 2026-02-05 00:03:01.575200 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-02-05 00:03:01.575771 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-02-05 00:03:01.577466 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-02-05 00:03:01.588270 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-02-05 00:03:01.591064 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-02-05 00:03:01.594113 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-02-05 00:03:01.595075 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-02-05 00:03:01.610704 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2026-02-05 00:03:01.612724 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-02-05 00:03:04.980468 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=48a471bd-ad2b-4c33-8f82-4464621d06c6/e6da1746-b16d-4279-a6c0-a95c954f705d] 2026-02-05 00:03:04.998667 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=ee54640e-21e7-42a9-9439-4f102eaeaf42/a3293e5b-f1f9-462e-9781-4b1b679aef30] 2026-02-05 00:03:05.005340 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=2202ad7d-3f7c-4ad5-9c0b-8cd04a453e3f/7f67b6e9-f99c-4354-902d-31e3a3988722] 2026-02-05 00:03:05.043429 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=ee54640e-21e7-42a9-9439-4f102eaeaf42/b7f472c8-b527-47c9-ac56-62f6f3e84fbf] 2026-02-05 00:03:05.047712 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=48a471bd-ad2b-4c33-8f82-4464621d06c6/0f4e2151-cc71-4085-93f0-18395b8a78d9] 2026-02-05 00:03:05.080924 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=2202ad7d-3f7c-4ad5-9c0b-8cd04a453e3f/33f37d33-b22b-44c3-8624-6074b4bf08c3] 2026-02-05 00:03:11.158416 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=ee54640e-21e7-42a9-9439-4f102eaeaf42/c8222ed3-0da2-4bb4-b170-21b6f36ecb8d] 2026-02-05 00:03:11.186656 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=48a471bd-ad2b-4c33-8f82-4464621d06c6/d601120f-cbb3-4953-a30b-917ccea713c0] 2026-02-05 00:03:11.216210 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s 
[id=2202ad7d-3f7c-4ad5-9c0b-8cd04a453e3f/9acd2af8-1818-4377-bd1d-628102e352cb] 2026-02-05 00:03:11.614108 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-02-05 00:03:21.622180 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-02-05 00:03:22.614103 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2331b0a7-c3ae-457c-a922-78b5924081ae] 2026-02-05 00:03:22.624422 | orchestrator | 2026-02-05 00:03:22.624493 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-02-05 00:03:22.624504 | orchestrator | 2026-02-05 00:03:22.624510 | orchestrator | Outputs: 2026-02-05 00:03:22.624517 | orchestrator | 2026-02-05 00:03:22.624524 | orchestrator | manager_address = 2026-02-05 00:03:22.624531 | orchestrator | private_key = 2026-02-05 00:03:22.691884 | orchestrator | ok: Runtime: 0:01:11.988272 2026-02-05 00:03:22.711259 | 2026-02-05 00:03:22.711369 | TASK [Fetch manager address] 2026-02-05 00:03:23.153637 | orchestrator | ok 2026-02-05 00:03:23.163188 | 2026-02-05 00:03:23.163298 | TASK [Set manager_host address] 2026-02-05 00:03:23.235520 | orchestrator | ok 2026-02-05 00:03:23.242270 | 2026-02-05 00:03:23.242361 | LOOP [Update ansible collections] 2026-02-05 00:03:24.311757 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-05 00:03:24.312009 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:03:24.312048 | orchestrator | Starting galaxy collection install process 2026-02-05 00:03:24.312074 | orchestrator | Process install dependency map 2026-02-05 00:03:24.312097 | orchestrator | Starting collection install process 2026-02-05 00:03:24.312119 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2026-02-05 00:03:24.312143 | 
orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2026-02-05 00:03:24.312173 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-02-05 00:03:24.312221 | orchestrator | ok: Item: commons Runtime: 0:00:00.709661 2026-02-05 00:03:25.476018 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:03:25.476125 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-05 00:03:25.476157 | orchestrator | Starting galaxy collection install process 2026-02-05 00:03:25.476181 | orchestrator | Process install dependency map 2026-02-05 00:03:25.476203 | orchestrator | Starting collection install process 2026-02-05 00:03:25.476223 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2026-02-05 00:03:25.476243 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2026-02-05 00:03:25.476262 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-05 00:03:25.476293 | orchestrator | ok: Item: services Runtime: 0:00:00.820517 2026-02-05 00:03:25.496622 | 2026-02-05 00:03:25.496804 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-05 00:03:36.049764 | orchestrator | ok 2026-02-05 00:03:36.060071 | 2026-02-05 00:03:36.060202 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-05 00:04:36.112702 | orchestrator | ok 2026-02-05 00:04:36.122258 | 2026-02-05 00:04:36.122370 | TASK [Fetch manager ssh hostkey] 2026-02-05 00:04:37.683428 | orchestrator | Output suppressed because no_log was given 2026-02-05 00:04:37.697274 | 2026-02-05 00:04:37.697421 | TASK [Get ssh keypair from terraform environment] 2026-02-05 00:04:38.233693 | 
orchestrator | ok: Runtime: 0:00:00.007131 2026-02-05 00:04:38.250208 | 2026-02-05 00:04:38.250337 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-05 00:04:38.298550 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-02-05 00:04:38.309333 | 2026-02-05 00:04:38.309434 | TASK [Run manager part 0] 2026-02-05 00:04:39.278537 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:04:39.334322 | orchestrator | 2026-02-05 00:04:39.334359 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-02-05 00:04:39.334365 | orchestrator | 2026-02-05 00:04:39.334377 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-02-05 00:04:40.972186 | orchestrator | ok: [testbed-manager] 2026-02-05 00:04:40.972227 | orchestrator | 2026-02-05 00:04:40.972245 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-05 00:04:40.972254 | orchestrator | 2026-02-05 00:04:40.972262 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:04:42.745959 | orchestrator | ok: [testbed-manager] 2026-02-05 00:04:42.746048 | orchestrator | 2026-02-05 00:04:42.746058 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-05 00:04:43.390476 | orchestrator | ok: [testbed-manager] 2026-02-05 00:04:43.390527 | orchestrator | 2026-02-05 00:04:43.390535 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-05 00:04:43.439679 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.439727 | orchestrator | 2026-02-05 00:04:43.439737 | orchestrator | TASK [Update package 
cache] **************************************************** 2026-02-05 00:04:43.473654 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.473709 | orchestrator | 2026-02-05 00:04:43.473720 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-05 00:04:43.505280 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.505328 | orchestrator | 2026-02-05 00:04:43.505333 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-05 00:04:43.535295 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.535367 | orchestrator | 2026-02-05 00:04:43.535376 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-05 00:04:43.572234 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.572292 | orchestrator | 2026-02-05 00:04:43.572300 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-05 00:04:43.610938 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.611004 | orchestrator | 2026-02-05 00:04:43.611056 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-05 00:04:43.643023 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:04:43.643085 | orchestrator | 2026-02-05 00:04:43.643093 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-05 00:04:44.376213 | orchestrator | changed: [testbed-manager] 2026-02-05 00:04:44.376259 | orchestrator | 2026-02-05 00:04:44.376265 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-05 00:07:09.591207 | orchestrator | changed: [testbed-manager] 2026-02-05 00:07:09.591278 | orchestrator | 2026-02-05 00:07:09.591297 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-05 
00:08:37.087186 | orchestrator | changed: [testbed-manager] 2026-02-05 00:08:37.087290 | orchestrator | 2026-02-05 00:08:37.087310 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-05 00:08:59.006129 | orchestrator | changed: [testbed-manager] 2026-02-05 00:08:59.006228 | orchestrator | 2026-02-05 00:08:59.006250 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-05 00:09:08.328024 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:08.328140 | orchestrator | 2026-02-05 00:09:08.328167 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-05 00:09:08.385810 | orchestrator | ok: [testbed-manager] 2026-02-05 00:09:08.385895 | orchestrator | 2026-02-05 00:09:08.385911 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-05 00:09:09.234084 | orchestrator | ok: [testbed-manager] 2026-02-05 00:09:09.234170 | orchestrator | 2026-02-05 00:09:09.234190 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-05 00:09:09.944657 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:10.016479 | orchestrator | 2026-02-05 00:09:10.016537 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-05 00:09:16.136044 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:16.136147 | orchestrator | 2026-02-05 00:09:16.136187 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-05 00:09:21.935696 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:21.935796 | orchestrator | 2026-02-05 00:09:21.935817 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-05 00:09:24.486445 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:24.486504 | 
orchestrator | 2026-02-05 00:09:24.486517 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-05 00:09:26.243015 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:26.243097 | orchestrator | 2026-02-05 00:09:26.243111 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-05 00:09:27.379407 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-05 00:09:27.379679 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-05 00:09:27.379700 | orchestrator | 2026-02-05 00:09:27.379714 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-05 00:09:27.436740 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-05 00:09:27.436794 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-05 00:09:27.436801 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-05 00:09:27.436808 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-05 00:09:34.406809 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-05 00:09:34.406854 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-05 00:09:34.406862 | orchestrator | 2026-02-05 00:09:34.406869 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-05 00:09:34.991992 | orchestrator | changed: [testbed-manager] 2026-02-05 00:09:34.992089 | orchestrator | 2026-02-05 00:09:34.992105 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-05 00:11:57.427977 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-05 00:11:57.428120 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-05 00:11:57.428143 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-05 00:11:57.428157 | orchestrator | 2026-02-05 00:11:57.428170 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-05 00:11:59.723630 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-05 00:11:59.723723 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-05 00:11:59.723738 | orchestrator | 2026-02-05 00:11:59.723751 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-05 00:11:59.723764 | orchestrator | 2026-02-05 00:11:59.723775 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:12:01.115636 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:01.115705 | orchestrator | 2026-02-05 00:12:01.115723 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-05 00:12:01.160625 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:01.160717 | 
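The two install tasks above (Galaxy collections and the locally synced ones) amount to `ansible-galaxy collection install` invocations against the shared `/usr/share/ansible` path. A hedged sketch, with collection names and paths taken from the log; the `DRY_RUN` guard is illustrative so the sketch only prints the commands it would run:

```shell
#!/bin/sh
# Sketch of the two collection-install steps from the log.
DRY_RUN=1
DEST=/usr/share/ansible

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

# Collections from Ansible Galaxy, including a version-constrained one.
for coll in ansible.netcommon ansible.posix 'community.docker>=3.10.2'; do
    run ansible-galaxy collection install -p "$DEST" "$coll"
done

# Local collections synced earlier into /opt/src.
for src in osism/ansible-collection-commons osism/ansible-collection-services; do
    run ansible-galaxy collection install -p "$DEST" "/opt/src/$src"
done
```

Installing the local checkouts last means the `999.0.0` development versions shadow anything with the same namespace pulled from Galaxy.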
orchestrator | 2026-02-05 00:12:01.160735 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-05 00:12:01.241580 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:01.241650 | orchestrator | 2026-02-05 00:12:01.241659 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-05 00:12:02.050254 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:02.050367 | orchestrator | 2026-02-05 00:12:02.050391 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-05 00:12:02.772965 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:02.773051 | orchestrator | 2026-02-05 00:12:02.773073 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-05 00:12:04.116864 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-05 00:12:04.116937 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-05 00:12:04.116952 | orchestrator | 2026-02-05 00:12:04.116984 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-05 00:12:05.547971 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:05.548087 | orchestrator | 2026-02-05 00:12:05.548107 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-05 00:12:07.277837 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:12:07.278103 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-05 00:12:07.278125 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:12:07.278137 | orchestrator | 2026-02-05 00:12:07.278151 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-05 00:12:07.338554 | orchestrator | skipping: 
[testbed-manager] 2026-02-05 00:12:07.440462 | orchestrator | 2026-02-05 00:12:07.440545 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-05 00:12:07.440580 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:07.440592 | orchestrator | 2026-02-05 00:12:07.440606 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-05 00:12:07.994585 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:07.994679 | orchestrator | 2026-02-05 00:12:07.994696 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-05 00:12:08.063554 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:08.063601 | orchestrator | 2026-02-05 00:12:08.063609 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-05 00:12:08.931476 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:12:08.931562 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:08.931578 | orchestrator | 2026-02-05 00:12:08.931590 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-05 00:12:08.967135 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:08.967224 | orchestrator | 2026-02-05 00:12:08.967239 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-05 00:12:09.006095 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:09.006169 | orchestrator | 2026-02-05 00:12:09.006183 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-05 00:12:09.046996 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:09.047064 | orchestrator | 2026-02-05 00:12:09.047078 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-05 00:12:09.112529 | 
orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:09.112615 | orchestrator | 2026-02-05 00:12:09.112630 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-05 00:12:09.848707 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:09.848811 | orchestrator | 2026-02-05 00:12:09.848837 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-05 00:12:09.848859 | orchestrator | 2026-02-05 00:12:09.848879 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:12:11.229758 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:11.229806 | orchestrator | 2026-02-05 00:12:11.229811 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-05 00:12:12.162049 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:12.162090 | orchestrator | 2026-02-05 00:12:12.162096 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:12:12.162102 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-05 00:12:12.162106 | orchestrator | 2026-02-05 00:12:12.623583 | orchestrator | ok: Runtime: 0:07:33.659503 2026-02-05 00:12:12.643984 | 2026-02-05 00:12:12.644125 | TASK [Point out that logging in on the manager is now possible] 2026-02-05 00:12:12.690474 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-05 00:12:12.700419 | 2026-02-05 00:12:12.700558 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-05 00:12:12.737756 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-05 00:12:12.746578 | 2026-02-05 00:12:12.746698 | TASK [Run manager part 1 + 2] 2026-02-05 00:12:14.245049 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:12:14.322350 | orchestrator | 2026-02-05 00:12:14.322412 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-05 00:12:14.322421 | orchestrator | 2026-02-05 00:12:14.322436 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:12:17.225303 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:17.225353 | orchestrator | 2026-02-05 00:12:17.225375 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-05 00:12:17.263617 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:17.263663 | orchestrator | 2026-02-05 00:12:17.263672 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-05 00:12:17.317127 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:17.317192 | orchestrator | 2026-02-05 00:12:17.317332 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-05 00:12:17.361321 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:17.361367 | orchestrator | 2026-02-05 00:12:17.361374 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-05 00:12:17.433416 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:17.433550 | orchestrator | 2026-02-05 00:12:17.433560 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-05 00:12:17.505873 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:17.505936 | orchestrator | 2026-02-05 00:12:17.505946 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-05 00:12:17.568751 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-05 00:12:17.568812 | orchestrator | 2026-02-05 00:12:17.568819 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-05 00:12:18.308612 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:18.308707 | orchestrator | 2026-02-05 00:12:18.308728 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-05 00:12:18.353364 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:18.353418 | orchestrator | 2026-02-05 00:12:18.353425 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-05 00:12:19.653246 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:19.653330 | orchestrator | 2026-02-05 00:12:19.653347 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-05 00:12:20.198679 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:20.198769 | orchestrator | 2026-02-05 00:12:20.198785 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-05 00:12:21.305913 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:21.305990 | orchestrator | 2026-02-05 00:12:21.306005 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-05 00:12:37.550784 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:37.550881 | orchestrator | 2026-02-05 00:12:37.550898 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-05 00:12:38.223542 | orchestrator | ok: [testbed-manager] 2026-02-05 00:12:38.223645 | orchestrator | 2026-02-05 00:12:38.223671 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-05 00:12:38.280053 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:38.280097 | orchestrator | 2026-02-05 00:12:38.280106 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-05 00:12:39.237421 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:39.237499 | orchestrator | 2026-02-05 00:12:39.237512 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-05 00:12:40.206773 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:40.206815 | orchestrator | 2026-02-05 00:12:40.206823 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-05 00:12:40.777157 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:40.777245 | orchestrator | 2026-02-05 00:12:40.777287 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-05 00:12:40.822893 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-05 00:12:40.823006 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-05 00:12:40.823022 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-05 00:12:40.823034 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-05 00:12:43.386681 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:43.386724 | orchestrator | 2026-02-05 00:12:43.386733 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-05 00:12:52.081101 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-05 00:12:52.081208 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-05 00:12:52.081228 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-05 00:12:52.081241 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-05 00:12:52.081313 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-05 00:12:52.081327 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-05 00:12:52.081339 | orchestrator | 2026-02-05 00:12:52.081352 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-05 00:12:53.101991 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:53.102899 | orchestrator | 2026-02-05 00:12:53.102963 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-05 00:12:53.147850 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:53.147917 | orchestrator | 2026-02-05 00:12:53.147928 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-05 00:12:56.321460 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:56.321556 | orchestrator | 2026-02-05 00:12:56.321574 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-05 00:12:56.357441 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:12:56.357500 | orchestrator | 2026-02-05 00:12:56.357508 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-05 00:14:30.360550 | orchestrator | changed: [testbed-manager] 2026-02-05 
00:14:30.360621 | orchestrator | 2026-02-05 00:14:30.360647 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-05 00:14:31.473223 | orchestrator | ok: [testbed-manager] 2026-02-05 00:14:31.473520 | orchestrator | 2026-02-05 00:14:31.473538 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:14:31.473544 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-05 00:14:31.473550 | orchestrator | 2026-02-05 00:14:31.866419 | orchestrator | ok: Runtime: 0:02:18.539356 2026-02-05 00:14:31.883478 | 2026-02-05 00:14:31.883617 | TASK [Reboot manager] 2026-02-05 00:14:33.420680 | orchestrator | ok: Runtime: 0:00:00.977122 2026-02-05 00:14:33.437795 | 2026-02-05 00:14:33.438008 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-05 00:14:48.399001 | orchestrator | ok 2026-02-05 00:14:48.408460 | 2026-02-05 00:14:48.408583 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-05 00:15:48.447369 | orchestrator | ok 2026-02-05 00:15:48.460025 | 2026-02-05 00:15:48.460160 | TASK [Deploy manager + bootstrap nodes] 2026-02-05 00:15:51.035811 | orchestrator | 2026-02-05 00:15:51.036043 | orchestrator | # DEPLOY MANAGER 2026-02-05 00:15:51.036070 | orchestrator | 2026-02-05 00:15:51.036085 | orchestrator | + set -e 2026-02-05 00:15:51.036099 | orchestrator | + echo 2026-02-05 00:15:51.036113 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-05 00:15:51.036132 | orchestrator | + echo 2026-02-05 00:15:51.036223 | orchestrator | + cat /opt/manager-vars.sh 2026-02-05 00:15:51.039522 | orchestrator | export NUMBER_OF_NODES=6 2026-02-05 00:15:51.039573 | orchestrator | 2026-02-05 00:15:51.039587 | orchestrator | export CEPH_VERSION=reef 2026-02-05 00:15:51.039603 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-05 00:15:51.039618 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-05 00:15:51.039645 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-05 00:15:51.039659 | orchestrator | 2026-02-05 00:15:51.039714 | orchestrator | export ARA=false 2026-02-05 00:15:51.039730 | orchestrator | export DEPLOY_MODE=manager 2026-02-05 00:15:51.039750 | orchestrator | export TEMPEST=true 2026-02-05 00:15:51.039763 | orchestrator | export IS_ZUUL=true 2026-02-05 00:15:51.039776 | orchestrator | 2026-02-05 00:15:51.039797 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243 2026-02-05 00:15:51.039812 | orchestrator | export EXTERNAL_API=false 2026-02-05 00:15:51.039846 | orchestrator | 2026-02-05 00:15:51.039858 | orchestrator | export IMAGE_USER=ubuntu 2026-02-05 00:15:51.039872 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-05 00:15:51.039883 | orchestrator | 2026-02-05 00:15:51.039893 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-05 00:15:51.039913 | orchestrator | 2026-02-05 00:15:51.039924 | orchestrator | + echo 2026-02-05 00:15:51.039937 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 00:15:51.040763 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 00:15:51.040790 | orchestrator | ++ INTERACTIVE=false 2026-02-05 00:15:51.040803 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 00:15:51.040817 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 00:15:51.041060 | orchestrator | + source /opt/manager-vars.sh 2026-02-05 00:15:51.041082 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-05 00:15:51.041120 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-05 00:15:51.041132 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-05 00:15:51.041175 | orchestrator | ++ CEPH_VERSION=reef 2026-02-05 00:15:51.041188 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-05 00:15:51.041199 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-05 00:15:51.041210 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-05 00:15:51.041248 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-05 00:15:51.041259 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-05 00:15:51.041281 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-05 00:15:51.041292 | orchestrator | ++ export ARA=false 2026-02-05 00:15:51.041304 | orchestrator | ++ ARA=false 2026-02-05 00:15:51.041315 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-05 00:15:51.041325 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-05 00:15:51.041336 | orchestrator | ++ export TEMPEST=true 2026-02-05 00:15:51.041347 | orchestrator | ++ TEMPEST=true 2026-02-05 00:15:51.041358 | orchestrator | ++ export IS_ZUUL=true 2026-02-05 00:15:51.041369 | orchestrator | ++ IS_ZUUL=true 2026-02-05 00:15:51.041380 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243 2026-02-05 00:15:51.041391 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243 2026-02-05 00:15:51.041402 | orchestrator | ++ export EXTERNAL_API=false 2026-02-05 00:15:51.041413 | orchestrator | ++ EXTERNAL_API=false 2026-02-05 00:15:51.041424 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-05 00:15:51.041435 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-05 00:15:51.041446 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-05 00:15:51.041457 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-05 00:15:51.041473 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-05 00:15:51.041484 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-05 00:15:51.041496 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-05 00:15:51.099227 | orchestrator | + docker version 2026-02-05 00:15:51.208757 | orchestrator | Client: Docker Engine - Community 2026-02-05 00:15:51.208884 | orchestrator | Version: 27.5.1 2026-02-05 00:15:51.208907 | orchestrator | API version: 1.47 2026-02-05 00:15:51.208937 | orchestrator | Go version: go1.22.11 2026-02-05 00:15:51.208948 | orchestrator | Git commit: 9f9e405 2026-02-05 00:15:51.208960 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-05 00:15:51.208972 | orchestrator | OS/Arch: linux/amd64 2026-02-05 00:15:51.208983 | orchestrator | Context: default 2026-02-05 00:15:51.208995 | orchestrator | 2026-02-05 00:15:51.209006 | orchestrator | Server: Docker Engine - Community 2026-02-05 00:15:51.209017 | orchestrator | Engine: 2026-02-05 00:15:51.209048 | orchestrator | Version: 27.5.1 2026-02-05 00:15:51.209062 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-05 00:15:51.209102 | orchestrator | Go version: go1.22.11 2026-02-05 00:15:51.209114 | orchestrator | Git commit: 4c9b3b0 2026-02-05 00:15:51.209125 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-05 00:15:51.209136 | orchestrator | OS/Arch: linux/amd64 2026-02-05 00:15:51.209193 | orchestrator | Experimental: false 2026-02-05 00:15:51.209207 | orchestrator | containerd: 2026-02-05 00:15:51.209275 | orchestrator | Version: v2.2.1 2026-02-05 00:15:51.209290 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-05 00:15:51.209302 | orchestrator | runc: 2026-02-05 00:15:51.209313 | orchestrator | Version: 1.3.4 2026-02-05 00:15:51.209325 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-05 00:15:51.209336 | orchestrator | docker-init: 2026-02-05 00:15:51.209347 | orchestrator | Version: 0.19.0 2026-02-05 00:15:51.209359 | orchestrator | GitCommit: de40ad0 2026-02-05 00:15:51.211997 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-05 00:15:51.221037 | orchestrator | + set -e 2026-02-05 00:15:51.221107 | orchestrator | + source /opt/manager-vars.sh 2026-02-05 00:15:51.221121 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-05 00:15:51.221134 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-05 00:15:51.221192 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-05 00:15:51.221207 | orchestrator | ++ CEPH_VERSION=reef 2026-02-05 00:15:51.221218 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-05 
00:15:51.221230 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-05 00:15:51.221241 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-05 00:15:51.221252 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-05 00:15:51.221263 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-05 00:15:51.221274 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-05 00:15:51.221284 | orchestrator | ++ export ARA=false 2026-02-05 00:15:51.221295 | orchestrator | ++ ARA=false 2026-02-05 00:15:51.221306 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-05 00:15:51.221317 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-05 00:15:51.221328 | orchestrator | ++ export TEMPEST=true 2026-02-05 00:15:51.221339 | orchestrator | ++ TEMPEST=true 2026-02-05 00:15:51.221359 | orchestrator | ++ export IS_ZUUL=true 2026-02-05 00:15:51.221371 | orchestrator | ++ IS_ZUUL=true 2026-02-05 00:15:51.221382 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243 2026-02-05 00:15:51.221393 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243 2026-02-05 00:15:51.221404 | orchestrator | ++ export EXTERNAL_API=false 2026-02-05 00:15:51.221415 | orchestrator | ++ EXTERNAL_API=false 2026-02-05 00:15:51.221425 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-05 00:15:51.221436 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-05 00:15:51.221447 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-05 00:15:51.221457 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-05 00:15:51.221469 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-05 00:15:51.221479 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-05 00:15:51.221490 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 00:15:51.221501 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 00:15:51.221512 | orchestrator | ++ INTERACTIVE=false 2026-02-05 00:15:51.221522 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 00:15:51.221537 | orchestrator | ++ OSISM_APPLY_RETRY=1 
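The trace above shows the deploy script re-sourcing `/opt/manager-vars.sh` (and `include.sh`) at the start of each stage, so the settings survive across separately invoked scripts like `000-manager.sh`. The pattern can be sketched with a temp file standing in for `/opt/manager-vars.sh` (the values are copied from the log):

```shell
#!/bin/sh
# Sketch of the "write once, source everywhere" pattern for manager-vars.sh.
set -e
vars=$(mktemp)
cat > "$vars" <<'EOF'
export NUMBER_OF_NODES=6
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
EOF

# Each deploy stage sources the file rather than relying on an
# environment inherited from a parent shell.
. "$vars"
echo "deploying manager $MANAGER_VERSION with $NUMBER_OF_NODES nodes"
rm -f "$vars"
```

Because the variables are `export`ed, they also reach any child processes (the `sh -c …/000-manager.sh` invocation seen above).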
2026-02-05 00:15:51.221549 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-05 00:15:51.221560 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-05 00:15:51.227995 | orchestrator | + set -e 2026-02-05 00:15:51.228027 | orchestrator | + VERSION=9.5.0 2026-02-05 00:15:51.228043 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:15:51.237760 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-05 00:15:51.237846 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:15:51.242832 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:15:51.248896 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-05 00:15:51.257707 | orchestrator | /opt/configuration ~ 2026-02-05 00:15:51.257786 | orchestrator | + set -e 2026-02-05 00:15:51.257801 | orchestrator | + pushd /opt/configuration 2026-02-05 00:15:51.257814 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 00:15:51.259559 | orchestrator | + source /opt/venv/bin/activate 2026-02-05 00:15:51.260935 | orchestrator | ++ deactivate nondestructive 2026-02-05 00:15:51.260962 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:51.260991 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:51.261039 | orchestrator | ++ hash -r 2026-02-05 00:15:51.261051 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:51.261098 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-05 00:15:51.261109 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-05 00:15:51.261120 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-05 00:15:51.261140 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-05 00:15:51.261191 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-05 00:15:51.261213 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-05 00:15:51.261224 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-05 00:15:51.261237 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:15:51.261249 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:15:51.261260 | orchestrator | ++ export PATH 2026-02-05 00:15:51.261272 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:51.261283 | orchestrator | ++ '[' -z '' ']' 2026-02-05 00:15:51.261293 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-05 00:15:51.261304 | orchestrator | ++ PS1='(venv) ' 2026-02-05 00:15:51.261315 | orchestrator | ++ export PS1 2026-02-05 00:15:51.261326 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-05 00:15:51.261337 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-05 00:15:51.261348 | orchestrator | ++ hash -r 2026-02-05 00:15:51.261363 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-05 00:15:52.294846 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-05 00:15:52.295685 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-05 00:15:52.297075 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-05 00:15:52.298668 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-05 00:15:52.299655 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-05 00:15:52.309758 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-05 00:15:52.311007 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-05 00:15:52.312122 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-05 00:15:52.313560 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-05 00:15:52.346518 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-05 00:15:52.348059 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-05 00:15:52.349660 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-05 00:15:52.350929 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-05 00:15:52.355106 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-05 00:15:52.576098 | orchestrator | ++ which gilt 2026-02-05 00:15:52.578858 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-05 00:15:52.578911 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-05 00:15:52.796123 | orchestrator | osism.cfg-generics: 2026-02-05 00:15:52.926619 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-05 00:15:52.926731 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-05 00:15:52.927060 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-05 00:15:52.927108 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-05 00:15:53.665433 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-05 00:15:53.677103 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-05 00:15:53.975070 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-05 00:15:54.024333 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 00:15:54.024434 | orchestrator | + deactivate 2026-02-05 00:15:54.024452 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-05 00:15:54.024468 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:15:54.024482 | orchestrator | + export PATH 2026-02-05 00:15:54.024495 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-05 00:15:54.024509 | orchestrator | + '[' -n '' ']' 2026-02-05 00:15:54.024525 | orchestrator | + hash -r 2026-02-05 00:15:54.024538 | orchestrator | + '[' -n '' ']' 2026-02-05 00:15:54.024551 | orchestrator | + unset VIRTUAL_ENV 2026-02-05 00:15:54.024563 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-05 00:15:54.024577 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-05 00:15:54.024590 | orchestrator | + unset -f deactivate 2026-02-05 00:15:54.024613 | orchestrator | ~ 2026-02-05 00:15:54.024625 | orchestrator | + popd 2026-02-05 00:15:54.026273 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-05 00:15:54.026306 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-05 00:15:54.026968 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-05 00:15:54.077745 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-05 00:15:54.077873 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-05 00:15:54.078872 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-05 00:15:54.130708 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:15:54.131439 | orchestrator | ++ semver 2024.2 2025.1 2026-02-05 00:15:54.181415 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:15:54.181493 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-05 00:15:54.267540 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 00:15:54.267631 | orchestrator | + source /opt/venv/bin/activate 2026-02-05 00:15:54.267647 | orchestrator | ++ deactivate nondestructive 2026-02-05 00:15:54.267697 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:54.267748 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:54.267770 | orchestrator | ++ hash -r 2026-02-05 00:15:54.267782 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:54.267793 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-05 00:15:54.267813 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-05 00:15:54.267825 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-05 00:15:54.268237 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-05 00:15:54.268264 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-05 00:15:54.268277 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-05 00:15:54.268288 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-05 00:15:54.268301 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:15:54.268335 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:15:54.268347 | orchestrator | ++ export PATH 2026-02-05 00:15:54.268358 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:15:54.268369 | orchestrator | ++ '[' -z '' ']' 2026-02-05 00:15:54.268379 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-05 00:15:54.268390 | orchestrator | ++ PS1='(venv) ' 2026-02-05 00:15:54.268401 | orchestrator | ++ export PS1 2026-02-05 00:15:54.268412 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-05 00:15:54.268431 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-05 00:15:54.268448 | orchestrator | ++ hash -r 2026-02-05 00:15:54.268460 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-05 00:15:55.354330 | orchestrator | 2026-02-05 00:15:55.354414 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-05 00:15:55.354427 | orchestrator | 2026-02-05 00:15:55.354437 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-05 00:15:55.907813 | orchestrator | ok: [testbed-manager] 2026-02-05 00:15:55.907921 | orchestrator | 2026-02-05 00:15:55.907937 | orchestrator | TASK [Copy fact files] ********************************************************* 
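The `semver` comparisons earlier in the trace (`semver 9.5.0 7.0.0` followed by `[[ 1 -ge 0 ]]`, `semver 9.5.0 10.0.0-0` followed by `[[ -1 -ge 0 ]]`) gate version-dependent settings such as `enable_osism_kubernetes: true`. A minimal stand-in for that comparator, assuming the helper prints -1/0/1, can be sketched on top of GNU `sort -V`:

```shell
# vercmp is a hypothetical stand-in for the testbed's `semver` helper:
# it prints -1, 0, or 1 depending on how $1 compares to $2.
vercmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts first, so it is the lower version
  else
    echo 1
  fi
}

# Mirrors the gating in the trace: 9.5.0 >= 7.0.0, so the flag is emitted.
[ "$(vercmp 9.5.0 7.0.0)" -ge 0 ] && echo 'enable_osism_kubernetes: true'
```

Note that real semver precedence treats pre-release suffixes like `10.0.0-0` specially; `sort -V` only approximates that, so this is a sketch rather than a drop-in replacement.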
2026-02-05 00:15:56.876198 | orchestrator | changed: [testbed-manager] 2026-02-05 00:15:56.876292 | orchestrator | 2026-02-05 00:15:56.876318 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-05 00:15:56.876384 | orchestrator | 2026-02-05 00:15:56.876401 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:15:59.046257 | orchestrator | ok: [testbed-manager] 2026-02-05 00:15:59.046317 | orchestrator | 2026-02-05 00:15:59.046324 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-05 00:15:59.090973 | orchestrator | ok: [testbed-manager] 2026-02-05 00:15:59.091029 | orchestrator | 2026-02-05 00:15:59.091035 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-05 00:15:59.530907 | orchestrator | changed: [testbed-manager] 2026-02-05 00:15:59.531010 | orchestrator | 2026-02-05 00:15:59.531030 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-05 00:15:59.574810 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:15:59.574912 | orchestrator | 2026-02-05 00:15:59.574929 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-05 00:15:59.922293 | orchestrator | changed: [testbed-manager] 2026-02-05 00:15:59.922398 | orchestrator | 2026-02-05 00:15:59.922418 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-05 00:16:00.251979 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:00.252028 | orchestrator | 2026-02-05 00:16:00.252035 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-05 00:16:00.358425 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:00.358506 | orchestrator | 2026-02-05 00:16:00.358517 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-05 00:16:00.358526 | orchestrator | 2026-02-05 00:16:00.358534 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:16:02.058246 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:02.058316 | orchestrator | 2026-02-05 00:16:02.058326 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-05 00:16:02.148605 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-05 00:16:02.148688 | orchestrator | 2026-02-05 00:16:02.148694 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-05 00:16:02.213412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-05 00:16:02.213469 | orchestrator | 2026-02-05 00:16:02.213475 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-05 00:16:03.279259 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-05 00:16:03.279518 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-05 00:16:03.279549 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-05 00:16:03.279562 | orchestrator | 2026-02-05 00:16:03.279579 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-05 00:16:05.035580 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-05 00:16:05.035702 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-05 00:16:05.035742 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-05 00:16:05.035755 | orchestrator | 2026-02-05 00:16:05.035767 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-05 00:16:05.661228 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:16:05.661342 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:05.661369 | orchestrator | 2026-02-05 00:16:05.661389 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-05 00:16:06.258689 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:16:06.258831 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:06.258867 | orchestrator | 2026-02-05 00:16:06.258893 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-05 00:16:06.316855 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:06.316947 | orchestrator | 2026-02-05 00:16:06.316964 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-05 00:16:06.663161 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:06.663249 | orchestrator | 2026-02-05 00:16:06.663287 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-05 00:16:06.724247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-05 00:16:06.724357 | orchestrator | 2026-02-05 00:16:06.724373 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-05 00:16:07.816035 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:07.816122 | orchestrator | 2026-02-05 00:16:07.816160 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-05 00:16:08.623752 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:08.625055 | orchestrator | 2026-02-05 00:16:08.625096 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-05 00:16:27.444979 | 
orchestrator | changed: [testbed-manager] 2026-02-05 00:16:27.445182 | orchestrator | 2026-02-05 00:16:27.445205 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-05 00:16:27.494255 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:27.494376 | orchestrator | 2026-02-05 00:16:27.494489 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-05 00:16:27.494507 | orchestrator | 2026-02-05 00:16:27.494519 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:16:29.205344 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:29.205437 | orchestrator | 2026-02-05 00:16:29.205454 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-05 00:16:29.316389 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-05 00:16:29.316507 | orchestrator | 2026-02-05 00:16:29.316533 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-05 00:16:29.371089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:16:29.371204 | orchestrator | 2026-02-05 00:16:29.371224 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-05 00:16:31.730653 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:31.731079 | orchestrator | 2026-02-05 00:16:31.731132 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-05 00:16:31.775887 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:31.775992 | orchestrator | 2026-02-05 00:16:31.776017 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-05 00:16:31.902217 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-05 00:16:31.902301 | orchestrator | 2026-02-05 00:16:31.902316 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-05 00:16:34.672654 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-05 00:16:34.672749 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-05 00:16:34.672764 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-05 00:16:34.672777 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-05 00:16:34.672788 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-05 00:16:34.672799 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-05 00:16:34.672810 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-05 00:16:34.672821 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-05 00:16:34.672832 | orchestrator | 2026-02-05 00:16:34.672844 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-05 00:16:35.290235 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:35.290328 | orchestrator | 2026-02-05 00:16:35.290347 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-05 00:16:35.914613 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:35.914751 | orchestrator | 2026-02-05 00:16:35.914770 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-05 00:16:35.989593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-05 00:16:35.989679 | orchestrator | 2026-02-05 00:16:35.989694 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-05 00:16:37.199412 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-05 00:16:37.200463 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-05 00:16:37.200531 | orchestrator | 2026-02-05 00:16:37.200547 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-05 00:16:37.817788 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:37.817879 | orchestrator | 2026-02-05 00:16:37.817898 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-05 00:16:37.868985 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:37.869070 | orchestrator | 2026-02-05 00:16:37.869083 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-05 00:16:37.945786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-05 00:16:37.945899 | orchestrator | 2026-02-05 00:16:37.945922 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-05 00:16:38.561994 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:38.562180 | orchestrator | 2026-02-05 00:16:38.562200 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-05 00:16:38.623587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-05 00:16:38.623678 | orchestrator | 2026-02-05 00:16:38.623693 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-05 00:16:39.973482 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:16:39.973574 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-05 00:16:39.973586 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:39.973594 | orchestrator | 2026-02-05 00:16:39.973601 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-05 00:16:40.575670 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:40.575735 | orchestrator | 2026-02-05 00:16:40.575741 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-05 00:16:40.625049 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:40.625128 | orchestrator | 2026-02-05 00:16:40.625134 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-05 00:16:40.727267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-05 00:16:40.727334 | orchestrator | 2026-02-05 00:16:40.727341 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-05 00:16:41.237546 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:41.237710 | orchestrator | 2026-02-05 00:16:41.237742 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-05 00:16:41.636623 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:41.636717 | orchestrator | 2026-02-05 00:16:41.636733 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-05 00:16:42.825800 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-05 00:16:42.825909 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-05 00:16:42.825932 | orchestrator | 2026-02-05 00:16:42.825953 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-05 00:16:43.462697 | orchestrator | changed: [testbed-manager] 2026-02-05 
00:16:43.462814 | orchestrator | 2026-02-05 00:16:43.462839 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-05 00:16:43.839477 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:43.839569 | orchestrator | 2026-02-05 00:16:43.839584 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-05 00:16:44.191240 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:44.191313 | orchestrator | 2026-02-05 00:16:44.191323 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-05 00:16:44.232889 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:44.232973 | orchestrator | 2026-02-05 00:16:44.232986 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-05 00:16:44.298709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-05 00:16:44.298818 | orchestrator | 2026-02-05 00:16:44.298830 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-05 00:16:44.340444 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:44.340557 | orchestrator | 2026-02-05 00:16:44.340585 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-05 00:16:46.310412 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-05 00:16:46.310517 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-05 00:16:46.310533 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-05 00:16:46.310545 | orchestrator | 2026-02-05 00:16:46.310558 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-05 00:16:47.015847 | orchestrator | changed: [testbed-manager] 2026-02-05 
00:16:47.015942 | orchestrator | 2026-02-05 00:16:47.015959 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-05 00:16:47.690810 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:47.690906 | orchestrator | 2026-02-05 00:16:47.690922 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-05 00:16:48.394679 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:48.394762 | orchestrator | 2026-02-05 00:16:48.394771 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-05 00:16:48.472577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-05 00:16:48.472689 | orchestrator | 2026-02-05 00:16:48.472709 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-05 00:16:48.513506 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:48.513601 | orchestrator | 2026-02-05 00:16:48.513617 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-05 00:16:49.185912 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-05 00:16:49.186006 | orchestrator | 2026-02-05 00:16:49.186068 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-05 00:16:49.275188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-05 00:16:49.275274 | orchestrator | 2026-02-05 00:16:49.275289 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-05 00:16:49.975541 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:49.975651 | orchestrator | 2026-02-05 00:16:49.975676 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-05 00:16:50.544906 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:50.544981 | orchestrator | 2026-02-05 00:16:50.544992 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-05 00:16:50.590276 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:16:50.590363 | orchestrator | 2026-02-05 00:16:50.590377 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-05 00:16:50.643522 | orchestrator | ok: [testbed-manager] 2026-02-05 00:16:50.643621 | orchestrator | 2026-02-05 00:16:50.643637 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-05 00:16:51.452690 | orchestrator | changed: [testbed-manager] 2026-02-05 00:16:51.452765 | orchestrator | 2026-02-05 00:16:51.452776 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-05 00:17:58.331115 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:58.331208 | orchestrator | 2026-02-05 00:17:58.331219 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-05 00:17:59.276235 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:59.276357 | orchestrator | 2026-02-05 00:17:59.276383 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-05 00:17:59.334647 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:17:59.334736 | orchestrator | 2026-02-05 00:17:59.334750 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-05 00:18:02.042570 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:02.042692 | orchestrator | 2026-02-05 00:18:02.042717 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-05 00:18:02.104854 | orchestrator | ok: [testbed-manager]
2026-02-05 00:18:02.104942 | orchestrator |
2026-02-05 00:18:02.104957 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-05 00:18:02.104969 | orchestrator |
2026-02-05 00:18:02.104981 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-05 00:18:02.239437 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:18:02.239532 | orchestrator |
2026-02-05 00:18:02.239549 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-05 00:19:02.304113 | orchestrator | Pausing for 60 seconds
2026-02-05 00:19:02.304241 | orchestrator | changed: [testbed-manager]
2026-02-05 00:19:02.304267 | orchestrator |
2026-02-05 00:19:02.304290 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-05 00:19:05.387754 | orchestrator | changed: [testbed-manager]
2026-02-05 00:19:05.387862 | orchestrator |
2026-02-05 00:19:05.387879 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-05 00:19:46.817488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-05 00:19:46.817606 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
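The `FAILED - RETRYING ... (50 retries left)` lines above come from an Ansible `until`/`retries` loop that polls until the manager containers report healthy. The same pattern in plain shell, as a minimal sketch (`is_healthy` being a hypothetical probe, e.g. a wrapper around `docker inspect` on the manager containers):

```shell
# Retry loop in the spirit of the "Wait for an healthy manager service"
# handler above (the real handler allows 50 attempts with a delay):
# runs the given command until it succeeds or the attempts run out.
retry() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    if "$@"; then return 0; fi
    sleep 0   # the real handler waits between attempts
  done
  return 1
}
```

Usage would be, for instance, `retry 50 is_healthy`.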
2026-02-05 00:19:46.817622 | orchestrator | changed: [testbed-manager]
2026-02-05 00:19:46.817636 | orchestrator |
2026-02-05 00:19:46.817668 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-05 00:19:56.708903 | orchestrator | changed: [testbed-manager]
2026-02-05 00:19:56.709060 | orchestrator |
2026-02-05 00:19:56.709089 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-05 00:19:56.799106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-05 00:19:56.799212 | orchestrator |
2026-02-05 00:19:56.799230 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-05 00:19:56.799242 | orchestrator |
2026-02-05 00:19:56.799254 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-05 00:19:56.856597 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:19:56.856697 | orchestrator |
2026-02-05 00:19:56.856715 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-05 00:19:56.933220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-05 00:19:56.933330 | orchestrator |
2026-02-05 00:19:56.933351 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-05 00:19:57.690388 | orchestrator | changed: [testbed-manager]
2026-02-05 00:19:57.690488 | orchestrator |
2026-02-05 00:19:57.690506 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-05 00:20:00.731479 | orchestrator | ok: [testbed-manager]
2026-02-05 00:20:00.731584 | orchestrator |
2026-02-05 00:20:00.731600 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-05 00:20:00.805612 | orchestrator | ok: [testbed-manager] => {
2026-02-05 00:20:00.805733 | orchestrator | "version_check_result.stdout_lines": [
2026-02-05 00:20:00.805763 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-05 00:20:00.805783 | orchestrator | "Checking running containers against expected versions...",
2026-02-05 00:20:00.805804 | orchestrator | "",
2026-02-05 00:20:00.805823 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-05 00:20:00.805842 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-05 00:20:00.805862 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.805882 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-05 00:20:00.805902 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.805922 | orchestrator | "",
2026-02-05 00:20:00.805940 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-05 00:20:00.805959 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-05 00:20:00.806012 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.806137 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-05 00:20:00.806163 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.806183 | orchestrator | "",
2026-02-05 00:20:00.806202 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-05 00:20:00.806222 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-05 00:20:00.806240 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.806262 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-05 00:20:00.806285 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.806304 | orchestrator | "",
2026-02-05 00:20:00.806323 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-05 00:20:00.806344 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-05 00:20:00.806366 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.806386 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-05 00:20:00.806400 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.806413 | orchestrator | "",
2026-02-05 00:20:00.806426 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-05 00:20:00.806442 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-05 00:20:00.806455 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.806468 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-05 00:20:00.806479 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.806490 | orchestrator | "",
2026-02-05 00:20:00.806501 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-05 00:20:00.806512 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-05 00:20:00.806522 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.806533 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-05 00:20:00.806543 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.806554 | orchestrator | "",
2026-02-05 00:20:00.806565 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-05 00:20:00.806576 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-05 00:20:00.806587 | orchestrator | " Enabled: true",
2026-02-05 00:20:00.806598 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-05 00:20:00.806609 | orchestrator | " Status: ✅ MATCH",
2026-02-05 00:20:00.806620 | orchestrator | "",
2026-02-05 00:20:00.806630 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-05 00:20:00.806641 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 00:20:00.806652 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.806662 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 00:20:00.806673 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.806684 | orchestrator | "", 2026-02-05 00:20:00.806694 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-05 00:20:00.806705 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-05 00:20:00.806716 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.806727 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-05 00:20:00.806737 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.806748 | orchestrator | "", 2026-02-05 00:20:00.806758 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-05 00:20:00.806769 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 00:20:00.806780 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.806790 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 00:20:00.806801 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.806818 | orchestrator | "", 2026-02-05 00:20:00.806835 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-05 00:20:00.806851 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.806867 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.806902 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.806921 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.806939 | orchestrator | "", 2026-02-05 00:20:00.806954 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-05 00:20:00.806992 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807005 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.807016 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807026 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.807037 | orchestrator | "", 2026-02-05 00:20:00.807049 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-05 00:20:00.807060 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807070 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.807081 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807106 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.807118 | orchestrator | "", 2026-02-05 00:20:00.807128 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-05 00:20:00.807139 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807149 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.807160 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807204 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.807224 | orchestrator | "", 2026-02-05 00:20:00.807243 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-05 00:20:00.807263 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807282 | orchestrator | " Enabled: true", 2026-02-05 00:20:00.807309 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 00:20:00.807320 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:20:00.807331 | orchestrator | "", 2026-02-05 00:20:00.807342 | orchestrator | "=== Summary ===", 2026-02-05 00:20:00.807352 | orchestrator | "Errors (version mismatches): 0", 2026-02-05 00:20:00.807363 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-05 00:20:00.807374 | orchestrator | "", 2026-02-05 00:20:00.807385 | orchestrator | "✅ All running containers match expected versions!" 2026-02-05 00:20:00.807396 | orchestrator | ] 2026-02-05 00:20:00.807407 | orchestrator | } 2026-02-05 00:20:00.807417 | orchestrator | 2026-02-05 00:20:00.807429 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-05 00:20:00.864267 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:20:00.864360 | orchestrator | 2026-02-05 00:20:00.864375 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:20:00.864389 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-05 00:20:00.864401 | orchestrator | 2026-02-05 00:20:00.938814 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 00:20:00.938914 | orchestrator | + deactivate 2026-02-05 00:20:00.938930 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-05 00:20:00.938943 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:20:00.938955 | orchestrator | + export PATH 2026-02-05 00:20:00.939015 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-05 00:20:00.939029 | orchestrator | + '[' -n '' ']' 2026-02-05 00:20:00.939041 | orchestrator | + hash -r 2026-02-05 00:20:00.939051 | orchestrator | + '[' -n '' ']' 2026-02-05 00:20:00.939062 | orchestrator | + unset VIRTUAL_ENV 2026-02-05 00:20:00.939073 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-05 00:20:00.939095 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-05 00:20:00.939107 | orchestrator | + unset -f deactivate 2026-02-05 00:20:00.939119 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-05 00:20:00.946753 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 00:20:00.946793 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-05 00:20:00.946805 | orchestrator | + local max_attempts=60 2026-02-05 00:20:00.946816 | orchestrator | + local name=ceph-ansible 2026-02-05 00:20:00.946856 | orchestrator | + local attempt_num=1 2026-02-05 00:20:00.947917 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:20:00.973706 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:20:00.973787 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-05 00:20:00.973800 | orchestrator | + local max_attempts=60 2026-02-05 00:20:00.973813 | orchestrator | + local name=kolla-ansible 2026-02-05 00:20:00.973824 | orchestrator | + local attempt_num=1 2026-02-05 00:20:00.974149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-05 00:20:01.004713 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:20:01.004807 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-05 00:20:01.004822 | orchestrator | + local max_attempts=60 2026-02-05 00:20:01.004834 | orchestrator | + local name=osism-ansible 2026-02-05 00:20:01.004846 | orchestrator | + local attempt_num=1 2026-02-05 00:20:01.005168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-05 00:20:01.034686 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:20:01.034764 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-05 00:20:01.034778 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-05 00:20:01.696786 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-05 00:20:01.868765 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-05 00:20:01.868876 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.868887 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.868896 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-05 00:20:01.868906 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-02-05 00:20:01.868932 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.868940 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.868949 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-02-05 00:20:01.868957 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.868980 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-02-05 00:20:01.868989 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.868997 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-02-05 00:20:01.869006 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.869111 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-05 00:20:01.869121 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.869129 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-02-05 00:20:01.873219 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-05 00:20:01.906091 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-05 00:20:01.906172 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-05 00:20:01.908679 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-05 00:20:14.050785 | orchestrator | 2026-02-05 00:20:14 | INFO  | Task e7fa34e8-ed4e-4a8f-9810-7e19ca521d6e (resolvconf) was prepared for execution. 2026-02-05 00:20:14.050875 | orchestrator | 2026-02-05 00:20:14 | INFO  | It takes a moment until task e7fa34e8-ed4e-4a8f-9810-7e19ca521d6e (resolvconf) has been started and output is visible here. 
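The xtrace earlier in the log (`wait_for_container_healthy 60 ceph-ansible` and friends) implies a polling helper roughly like the sketch below. This is a hypothetical reconstruction from the trace, not the actual script shipped in the testbed configuration; the `docker` stub at the end only exists so the sketch runs without a Docker daemon.

```shell
#!/usr/bin/env bash
# Sketch of the health-polling helper implied by the xtrace above
# (hypothetical reconstruction; the real script may differ).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}

# Stub 'docker' so the sketch is runnable without a Docker daemon;
# the real invocation in the trace uses /usr/bin/docker directly.
docker() { echo healthy; }

wait_for_container_healthy 60 ceph-ansible && echo ok  # prints "ok"
```

In the log, all three containers (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) already report `healthy` on the first `docker inspect`, so the retry loop never runs.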
2026-02-05 00:20:27.643097 | orchestrator | 2026-02-05 00:20:27.643197 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-05 00:20:27.643211 | orchestrator | 2026-02-05 00:20:27.643221 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:20:27.643230 | orchestrator | Thursday 05 February 2026 00:20:18 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-02-05 00:20:27.643239 | orchestrator | ok: [testbed-manager] 2026-02-05 00:20:27.643248 | orchestrator | 2026-02-05 00:20:27.643257 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-05 00:20:27.643267 | orchestrator | Thursday 05 February 2026 00:20:21 +0000 (0:00:03.682) 0:00:03.838 ***** 2026-02-05 00:20:27.643276 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:20:27.643286 | orchestrator | 2026-02-05 00:20:27.643295 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-05 00:20:27.643303 | orchestrator | Thursday 05 February 2026 00:20:21 +0000 (0:00:00.072) 0:00:03.910 ***** 2026-02-05 00:20:27.643312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-05 00:20:27.643322 | orchestrator | 2026-02-05 00:20:27.643331 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-05 00:20:27.643340 | orchestrator | Thursday 05 February 2026 00:20:21 +0000 (0:00:00.087) 0:00:03.998 ***** 2026-02-05 00:20:27.643365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:20:27.643375 | orchestrator | 2026-02-05 00:20:27.643383 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-05 00:20:27.643392 | orchestrator | Thursday 05 February 2026 00:20:21 +0000 (0:00:00.076) 0:00:04.074 ***** 2026-02-05 00:20:27.643401 | orchestrator | ok: [testbed-manager] 2026-02-05 00:20:27.643410 | orchestrator | 2026-02-05 00:20:27.643418 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-05 00:20:27.643427 | orchestrator | Thursday 05 February 2026 00:20:23 +0000 (0:00:01.067) 0:00:05.141 ***** 2026-02-05 00:20:27.643436 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:20:27.643444 | orchestrator | 2026-02-05 00:20:27.643453 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-05 00:20:27.643462 | orchestrator | Thursday 05 February 2026 00:20:23 +0000 (0:00:00.067) 0:00:05.209 ***** 2026-02-05 00:20:27.643470 | orchestrator | ok: [testbed-manager] 2026-02-05 00:20:27.643499 | orchestrator | 2026-02-05 00:20:27.643508 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-05 00:20:27.643517 | orchestrator | Thursday 05 February 2026 00:20:23 +0000 (0:00:00.490) 0:00:05.700 ***** 2026-02-05 00:20:27.643526 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:20:27.643534 | orchestrator | 2026-02-05 00:20:27.643543 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-05 00:20:27.643553 | orchestrator | Thursday 05 February 2026 00:20:23 +0000 (0:00:00.085) 0:00:05.785 ***** 2026-02-05 00:20:27.643561 | orchestrator | changed: [testbed-manager] 2026-02-05 00:20:27.643570 | orchestrator | 2026-02-05 00:20:27.643578 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-05 00:20:27.643587 | orchestrator | Thursday 05 February 2026 00:20:24 +0000 (0:00:00.536) 0:00:06.322 ***** 2026-02-05 00:20:27.643596 | orchestrator | changed: 
[testbed-manager] 2026-02-05 00:20:27.643604 | orchestrator | 2026-02-05 00:20:27.643614 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-05 00:20:27.643624 | orchestrator | Thursday 05 February 2026 00:20:25 +0000 (0:00:01.068) 0:00:07.391 ***** 2026-02-05 00:20:27.643634 | orchestrator | ok: [testbed-manager] 2026-02-05 00:20:27.643645 | orchestrator | 2026-02-05 00:20:27.643655 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-05 00:20:27.643666 | orchestrator | Thursday 05 February 2026 00:20:26 +0000 (0:00:00.948) 0:00:08.339 ***** 2026-02-05 00:20:27.643676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-05 00:20:27.643686 | orchestrator | 2026-02-05 00:20:27.643697 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-05 00:20:27.643707 | orchestrator | Thursday 05 February 2026 00:20:26 +0000 (0:00:00.074) 0:00:08.414 ***** 2026-02-05 00:20:27.643717 | orchestrator | changed: [testbed-manager] 2026-02-05 00:20:27.643727 | orchestrator | 2026-02-05 00:20:27.643737 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:20:27.643749 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 00:20:27.643758 | orchestrator | 2026-02-05 00:20:27.643766 | orchestrator | 2026-02-05 00:20:27.643775 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:20:27.643783 | orchestrator | Thursday 05 February 2026 00:20:27 +0000 (0:00:01.091) 0:00:09.505 ***** 2026-02-05 00:20:27.643792 | orchestrator | =============================================================================== 2026-02-05 00:20:27.643800 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.68s 2026-02-05 00:20:27.643809 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2026-02-05 00:20:27.643818 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2026-02-05 00:20:27.643826 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s 2026-02-05 00:20:27.643835 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-02-05 00:20:27.643843 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2026-02-05 00:20:27.643868 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-02-05 00:20:27.643877 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-05 00:20:27.643885 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-02-05 00:20:27.643894 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-02-05 00:20:27.643903 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-02-05 00:20:27.643911 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-05 00:20:27.643927 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-05 00:20:27.920398 | orchestrator | + osism apply sshconfig 2026-02-05 00:20:39.824306 | orchestrator | 2026-02-05 00:20:39 | INFO  | Task ef14d5c6-e95f-48e7-9998-fd1462c42c71 (sshconfig) was prepared for execution. 
2026-02-05 00:20:39.824391 | orchestrator | 2026-02-05 00:20:39 | INFO  | It takes a moment until task ef14d5c6-e95f-48e7-9998-fd1462c42c71 (sshconfig) has been started and output is visible here. 2026-02-05 00:20:50.349853 | orchestrator | 2026-02-05 00:20:50.350086 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-05 00:20:50.350104 | orchestrator | 2026-02-05 00:20:50.350129 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-05 00:20:50.350136 | orchestrator | Thursday 05 February 2026 00:20:43 +0000 (0:00:00.140) 0:00:00.140 ***** 2026-02-05 00:20:50.350142 | orchestrator | ok: [testbed-manager] 2026-02-05 00:20:50.350149 | orchestrator | 2026-02-05 00:20:50.350155 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-05 00:20:50.350161 | orchestrator | Thursday 05 February 2026 00:20:44 +0000 (0:00:00.529) 0:00:00.669 ***** 2026-02-05 00:20:50.350167 | orchestrator | changed: [testbed-manager] 2026-02-05 00:20:50.350173 | orchestrator | 2026-02-05 00:20:50.350180 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-05 00:20:50.350185 | orchestrator | Thursday 05 February 2026 00:20:44 +0000 (0:00:00.460) 0:00:01.129 ***** 2026-02-05 00:20:50.350191 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-05 00:20:50.350198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-05 00:20:50.350208 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-05 00:20:50.350217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-05 00:20:50.350225 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-05 00:20:50.350234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-05 00:20:50.350243 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-05 00:20:50.350254 | orchestrator | 2026-02-05 00:20:50.350264 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-05 00:20:50.350274 | orchestrator | Thursday 05 February 2026 00:20:49 +0000 (0:00:05.090) 0:00:06.220 ***** 2026-02-05 00:20:50.350282 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:20:50.350288 | orchestrator | 2026-02-05 00:20:50.350294 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-05 00:20:50.350300 | orchestrator | Thursday 05 February 2026 00:20:49 +0000 (0:00:00.052) 0:00:06.272 ***** 2026-02-05 00:20:50.350305 | orchestrator | changed: [testbed-manager] 2026-02-05 00:20:50.350311 | orchestrator | 2026-02-05 00:20:50.350317 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:20:50.350324 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:20:50.350331 | orchestrator | 2026-02-05 00:20:50.350337 | orchestrator | 2026-02-05 00:20:50.350342 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:20:50.350352 | orchestrator | Thursday 05 February 2026 00:20:50 +0000 (0:00:00.500) 0:00:06.773 ***** 2026-02-05 00:20:50.350361 | orchestrator | =============================================================================== 2026-02-05 00:20:50.350371 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.09s 2026-02-05 00:20:50.350380 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s 2026-02-05 00:20:50.350389 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2026-02-05 00:20:50.350399 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.46s 2026-02-05 00:20:50.350408 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s 2026-02-05 00:20:50.551339 | orchestrator | + osism apply known-hosts 2026-02-05 00:21:02.285866 | orchestrator | 2026-02-05 00:21:02 | INFO  | Task 7ca43a3f-d81b-4300-928d-dc0ba2f9789b (known-hosts) was prepared for execution. 2026-02-05 00:21:02.286003 | orchestrator | 2026-02-05 00:21:02 | INFO  | It takes a moment until task 7ca43a3f-d81b-4300-928d-dc0ba2f9789b (known-hosts) has been started and output is visible here. 2026-02-05 00:21:18.159630 | orchestrator | 2026-02-05 00:21:18.159729 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-05 00:21:18.159746 | orchestrator | 2026-02-05 00:21:18.159758 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-05 00:21:18.159770 | orchestrator | Thursday 05 February 2026 00:21:06 +0000 (0:00:00.160) 0:00:00.160 ***** 2026-02-05 00:21:18.159782 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-05 00:21:18.159794 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-05 00:21:18.159804 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-05 00:21:18.159816 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-05 00:21:18.159827 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-05 00:21:18.159838 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-05 00:21:18.159849 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-05 00:21:18.159859 | orchestrator | 2026-02-05 00:21:18.159870 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-05 00:21:18.159882 | orchestrator | Thursday 05 February 2026 00:21:12 +0000 (0:00:05.842) 0:00:06.002 ***** 2026-02-05 
00:21:18.159895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-05 00:21:18.159908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-05 00:21:18.159919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-05 00:21:18.159930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-05 00:21:18.159941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-05 00:21:18.160037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-05 00:21:18.160052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-05 00:21:18.160063 | orchestrator |
2026-02-05 00:21:18.160074 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:18.160085 | orchestrator | Thursday 05 February 2026 00:21:12 +0000 (0:00:00.137) 0:00:06.140 *****
2026-02-05 00:21:18.160096 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFSHzWl/INo9rQGinyx9gsMVqq5NMt/v7/kZ50O31YpG)
2026-02-05 00:21:18.160119 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsGvepld8iL7rB0qN4UU71ExJSx0TEN5oomCZvBUU++Je7Z0SpLuAL/HJhpb7BmqsjlevW9QZVItAWyTajwt3f9ChwC6qpicMVSsNjnxkSuQ6fzvARXlhG4uJJtM8cZTS4OWpjnAzFkTezEfFHyvWkv0+YK8c83slKOrFZMmfd/Ah78ukcxxZT7Fe2UJblXz50ObtO0mdiJDR9SOhhFqq7cv/U86EvIG1e9NEViza+hJTeVLkEvg5APZa/w8o49p4KZwGxNYSmbw28oPThOgRU1yVG8G/A5Zh3FzX8dM8MBeKs19RNdiCl9s/QBKTZN4ECVxSerXzWN4v9PDPe1gyZ8keoLy96lwCnzESG+VamGSvn+kbR5344+L53k3vhQER5nz/nlColPwpB+a7csAZtXZdX/4Z98ZzejCKP9lN3zYjR6MU7BLa8iyQl6bza04Gs4LJlUuRbaxAVnBEmW2dVVNlt3cFYjhrsYVh6/Rdi01cgCcpJsr+rt1sF+8Kd2ts=)
2026-02-05 00:21:18.160163 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAipe7L6skXd4HftXo6XPF36MnapciLQgBpxuclAtZWU+2EvfG9SDCfXXSKajqlyvL9QQjHWD0AcxA2gPY1Frkc=)
2026-02-05 00:21:18.160179 | orchestrator |
2026-02-05 00:21:18.160192 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:18.160204 | orchestrator | Thursday 05 February 2026 00:21:13 +0000 (0:00:01.085) 0:00:07.225 *****
2026-02-05 00:21:18.160237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv2dlEBfVbAPTW/wyE4qc6WfY/mnWA9612A2QkNlix3HehTCLq9n5aWN6gYn+VQsCmTlrq9VZ+s8peK16PnJIVCTplBF1tIb+tLk6OSI8A4BKsnEN61fT6CAksTFOrIvFOK0G4qIoBu+6aDu4KBfvOpa3yaQU4mINo7AWcRzJ75NZh7G1Kc8eFfwa6XGhrJmrhobOQVljKtqIiIEezZhkZfiLWwQk0p5MSmlvXJulYzyaE1ifHfQkZCxtEuBdiSsjZtm5k9KNArD3UZRK21AMKYmBj7F9x5g/yKzS/CQbOL738WlFkUu36bfMVQCXM1qBR/E4Dc6OJOzB6eDF3UDo5xCKZAVRkk0i+OPksW3hZn0BtLwCgvBhF9/WFnXWNUvQ2SZ0sn80Hvdh1Xe4WlwRvmWCLIgJymhy05Kxd9CV6wdadQzj4S1gasx8cvF8sL/l7KIw3xoF9d/YrmdoNCcJomY0KTyhQ1O8XwmxBCPbBYVhjO/D/7LYQ26g3rfNd5Sk=)
2026-02-05 00:21:18.160251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOmp6Xgr6dgKE6WeSFbrxuA+M2/RyGbbnb0axRQYkVQqN9FEUOaL/iFEcuAgo2sN+Lcahhefx57YcbmapJaTOhs=)
2026-02-05 00:21:18.160266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMDA98Bop5wrIRMP72ESKS2SjuTnDzzbusfT5lx9ROMt)
2026-02-05 00:21:18.160278 | orchestrator |
2026-02-05 00:21:18.160291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:18.160310 | orchestrator | Thursday 05 February 2026 00:21:14 +0000 (0:00:00.983) 0:00:08.208 *****
2026-02-05 00:21:18.160331 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwQFai26fHot0xeh1xXnFd8M7OxRPlTUOg6xK6a5ZagrOaLvyo5SVxD8ZkmvpNj2H3WmrTI09F2vmxI38SslGjpvKnTDu8CWlMgc4FdYRbMsKsoNrockSI7nG18p+JAAusmXom/MXebnGMQImTBWvIYZib1IetD7XzI7W1Dh0+ggUk2CjEIm3z70Hk4lXTfwpx1cpYdOdMPg99IVdHurz5eGsp7vUbdD1GbGEfpU10Yx/eob9SUhIvwV90vMpvo/O+0MZkcN56wwvkKx5YMz8A2dDkvFfzZzt62sMHNer8qr6/FeO0WrQaQ8S7MAZPLPRxx+uYU6opijtvLo5a1eCbnUm5iEYBLjJGXXrvWGkke8Uguww4P+7CXfgu5H5rnrS79Jr0oqZ/myvYiIkcVEUVRZWeCzLWDWaLH/fGYpsVVIbw7s6Lsi1cLAJSgOILOwhaDd5knQdQx+PLhZclk+0TkCE/lLL6kfDmgaSDPVMOyNwb28AemzMZ+Qur5gH8Bqs=)
2026-02-05 00:21:18.160350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIUQTmeVRmMvmmfpUT3Qebf773UL9FB2UFwy+yQzatXU81KLJbtL0TUdWO2L9DMGNHloEEy178WinL5aWTSME1g=)
2026-02-05 00:21:18.160368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdA4jeFUu8HovoLxkhdiCVXq2HIW0ng+K+xhKe1NWHL)
2026-02-05 00:21:18.160387 | orchestrator |
2026-02-05 00:21:18.160405 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:18.160436 | orchestrator | Thursday 05 February 2026 00:21:15 +0000 (0:00:00.953) 0:00:09.162 *****
2026-02-05 00:21:18.160464 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDVMVCIvXkba4+9JLsEUUlEVeWPIaYFZR0ZxOjD5VCfa)
2026-02-05 00:21:18.160491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEscwEl2sUeq4borfkgbSwQgbTrl5XwDNt4ZVovg3gZFwokPm54+vAl4p+hd8lfQopF2IzI7xb3c6rQ2PEF5B9cRqS+jAEQ4HRWdC63MEGKROHGfS+Sa8oyxh2fgGLvI4S/8hni8cDuexg76Nx0Xt4nwIRPpzBUpKCeePV2eu2ByrCGTI64FhsEHL+tfB2J55eCKg5MIdFY5+pHpgLGgqbwXyoLnVr7I6E0cdoFlvrEtJu3Jh/W43cGpbQlXsrrCif9U9AxNZxEdbNUPGsAwNsdpxZIT13oV76v3yrKJTEJjMXN8LPOOHjTVPPhVwH9xrun3buswYBz6mmSdODH0C1Ldq9CWo7NzDmgcvoQfo67blXKQgvLysVYhDDhsJ6/igK8gejSSMrjshlJwWSYdvKCMHf7F4z8jYrJEPpFCZ/bgdxTfMJ1SUUSUpgkK8Zs2/mAfSKjL7U2HAzBPO63r+euxmloXZcTBmKn5N2SHqy+YLcy2NgHj03VWh1dX4VAbs=)
2026-02-05 00:21:18.160538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC3OCk0DVE0DzKLN+pi9FuRC1UynET8CYb/pOVQi7fcyOPW1Ff8NdfKJmgiG8A6zCKTmrzBxEZ0Z6BC2IhSl6cw=)
2026-02-05 00:21:18.160565 | orchestrator |
2026-02-05 00:21:18.160593 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:18.160621 | orchestrator | Thursday 05 February 2026 00:21:16 +0000 (0:00:00.970) 0:00:10.132 *****
2026-02-05 00:21:18.160853 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFi2L/bvdgnsAe7OOyX6B3ZNbEk5s9QUGfOn6+xCmuju8YJVsFtd4hzCTJkOO+QANESgyaQgN8dlpIzWikkf6bc=)
2026-02-05 00:21:18.160889 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZ6FoW6KqfX1aevBvFYHgNgtZFT8dpLDTZQqJy761Vj5IFXXQzcSbhqYpE+a0rp6unhIyRA5Un1Qw1SMwGQby494ZU1ZTh8xpN0XvXGkdYN/bwUR7r6a1gqZonDd6wK6UN4/H3mmFtBu+/toZWZIRO4V8odWzSTfTqfq4wGV7OfO3nOnK+GfK1eUQjbkUUo2CMSAVbvD0redmw2neArvtuKL8EfRak3g1TQ/zmxBVTTsx4T2L87F2fm+Ae4Hospur8jClWqtdxLaF8IErBxz78mOufxwbC71lsqYMsJlB+dpl6MFgFnkkOFBULv9R534rLU7V/QAm1hHJj/Ma0Darx1AdSY0MJVf614yM37I72Xz8BNvp2WJDRIsm05MHot8ov/3uhiYKHZpNjAgxkw6W0zRmBZM5q5DGv9VQOtmdw3GXFGSP88F16dUV6F+Flnv0/KSFSRdahWRtLhNEnoY4a/QilEo3+F1vN+/wZGN8FOUMJG3NeKJdULnlRpTvbqgM=)
2026-02-05 00:21:18.160916 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILvRAyFvYMSydm7yHrdYlb8DLqqAcnWRpF3Npei4+Vpr)
2026-02-05 00:21:18.160944 | orchestrator |
2026-02-05 00:21:18.160998 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:18.161023 | orchestrator | Thursday 05 February 2026 00:21:17 +0000 (0:00:00.963) 0:00:11.095 *****
2026-02-05 00:21:18.161066 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI1ergH/6TSFGtZWNrFrwpSekrXcQDoHRW3Q0k9Z5GTMc++9Ut1U9caCU9MZZP65QuVwMhr9GffiunHuJ/V+vDQ=)
2026-02-05 00:21:28.687697 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGC+8/N/jvVatqSU/xIUoc0lOv3aQoMn048GQpoIOgNOOt3omxn8bvomQ0ElyOUf3LFPBHOm3ta8vCp3UzJogQZy9TYMHdhSgQDirWBSMsMVynJh/+zgKc7jybiJl8jICrFF7XFq1mAMIJB2pD9K/MOmSofNPLZKbX44Ale8gR1OrgBS/nXcpjYb0n6tBTRwyIr/TsdbDY/yWpK/zN+cmdGJadoTt7EwNrFKYdx1jmvhHigjb7oXC9lzPcjfiAMY6NMyheD26XG/+LTqSxNXRQUsWzYB7vrV5GZVRYQdj+zfsuB4qlf7LkDB0vS2qXMI22H7G+PZVtawCnDrSg5sXffeY803ukBthxY/ecK44hfM2DhO0rQckgPewCbIIdLoRrNR6amiDxJk7XDhinMsLBCJGxQ8R9BOBVH0/ZmlhdYCmdShMd0rpVQGS1fAAA+SyD37Kno6BcWmi9hWbjdgyezyUc24OdE5NvOTkhnu90S0t0w8ykGh4+U1tRQi4lXek=)
2026-02-05 00:21:28.687793 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1vCvMxnVhulrdsF0RXVJ2cteG0WAbRqmOTwGtyF/oz)
2026-02-05 00:21:28.687806 | orchestrator |
2026-02-05 00:21:28.687815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:28.687824 | orchestrator | Thursday 05 February 2026 00:21:18 +0000 (0:00:00.984) 0:00:12.080 *****
2026-02-05 00:21:28.687832 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHoa/4buKQBhzalaizWrtB6/tZ48QA9k2bwcEIh/NkLGUU6CNRjYQsGDUodg1EASDnvbodNlaNr4yMF591cgxOQ=)
2026-02-05 00:21:28.687842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSAIPhiJAUs5t1WsIYlLad3fmKDc7oCxVAG94RNpssaGV7Eo3SG1zYvDC4niDdvjYSXpNcBGlb6KU3PB3JQ4QnnrjeqfdAmYvVOhxxO2EzmwCU3kO4XrWGuj98WFLXnn8fRYqNE2/O3huPZWLWzooV3pkqcZWJCjWrhde1WDYO1jtHMEfCZuGZo4qezTAtmgiEAlyw3fvbZ8JaEC6TsqH5gLEyILy4c/97cZmwj5aQBaDmsnJzzMYYRW74UxtAZqzfJn6/cH9q7ImmUTGutt/3Sd4f5QwQ7auKueNYrTMPUfD04HcbixcuNaNQrD83ghSZzOjartzdENzrMDPMJfLIeDwLqxTJUloTfcdPlG4zu2kUIyIcVwXwzPkFWqNEWbDZjWMXsTl2uF6LrO37B/cU2F/TbkoE2f154B9Pk98HUmPWeBfhjGEMzluNnTlTUvscYByjltbl1WKRm8NSqo62U8vMtgizoASpm/gt4vSrtdgx7nN00tD9OdUY0LOpgTk=)
2026-02-05 00:21:28.687869 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObLoFxJIdUFaQt0wX/3AFd2kyqOOEuIzTHBtnARpJac)
2026-02-05 00:21:28.687877 | orchestrator |
2026-02-05 00:21:28.687886 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-02-05 00:21:28.687895 | orchestrator | Thursday 05 February 2026 00:21:19 +0000 (0:00:01.039) 0:00:13.120 *****
2026-02-05 00:21:28.687903 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-05 00:21:28.687911 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-05 00:21:28.687919 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-05 00:21:28.687926 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-05 00:21:28.687933 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-05 00:21:28.687941 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-05 00:21:28.687948 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-05 00:21:28.687955 | orchestrator |
2026-02-05 00:21:28.687963 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-02-05 00:21:28.687973 | orchestrator | Thursday 05 February 2026 00:21:24 +0000 (0:00:05.197) 0:00:18.317 *****
2026-02-05 00:21:28.688032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-05 00:21:28.688043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-05 00:21:28.688051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-05 00:21:28.688059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-05 00:21:28.688066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-05 00:21:28.688074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-05 00:21:28.688081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-05 00:21:28.688089 | orchestrator |
2026-02-05 00:21:28.688111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:28.688120 | orchestrator | Thursday 05 February 2026 00:21:24 +0000 (0:00:00.170) 0:00:18.487 *****
2026-02-05 00:21:28.688137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsGvepld8iL7rB0qN4UU71ExJSx0TEN5oomCZvBUU++Je7Z0SpLuAL/HJhpb7BmqsjlevW9QZVItAWyTajwt3f9ChwC6qpicMVSsNjnxkSuQ6fzvARXlhG4uJJtM8cZTS4OWpjnAzFkTezEfFHyvWkv0+YK8c83slKOrFZMmfd/Ah78ukcxxZT7Fe2UJblXz50ObtO0mdiJDR9SOhhFqq7cv/U86EvIG1e9NEViza+hJTeVLkEvg5APZa/w8o49p4KZwGxNYSmbw28oPThOgRU1yVG8G/A5Zh3FzX8dM8MBeKs19RNdiCl9s/QBKTZN4ECVxSerXzWN4v9PDPe1gyZ8keoLy96lwCnzESG+VamGSvn+kbR5344+L53k3vhQER5nz/nlColPwpB+a7csAZtXZdX/4Z98ZzejCKP9lN3zYjR6MU7BLa8iyQl6bza04Gs4LJlUuRbaxAVnBEmW2dVVNlt3cFYjhrsYVh6/Rdi01cgCcpJsr+rt1sF+8Kd2ts=)
2026-02-05 00:21:28.688149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAipe7L6skXd4HftXo6XPF36MnapciLQgBpxuclAtZWU+2EvfG9SDCfXXSKajqlyvL9QQjHWD0AcxA2gPY1Frkc=)
2026-02-05 00:21:28.688164 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFSHzWl/INo9rQGinyx9gsMVqq5NMt/v7/kZ50O31YpG)
2026-02-05 00:21:28.688172 | orchestrator |
2026-02-05 00:21:28.688179 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:28.688187 | orchestrator | Thursday 05 February 2026 00:21:25 +0000 (0:00:01.021) 0:00:19.509 *****
2026-02-05 00:21:28.688195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOmp6Xgr6dgKE6WeSFbrxuA+M2/RyGbbnb0axRQYkVQqN9FEUOaL/iFEcuAgo2sN+Lcahhefx57YcbmapJaTOhs=)
2026-02-05 00:21:28.688203 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMDA98Bop5wrIRMP72ESKS2SjuTnDzzbusfT5lx9ROMt)
2026-02-05 00:21:28.688211 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv2dlEBfVbAPTW/wyE4qc6WfY/mnWA9612A2QkNlix3HehTCLq9n5aWN6gYn+VQsCmTlrq9VZ+s8peK16PnJIVCTplBF1tIb+tLk6OSI8A4BKsnEN61fT6CAksTFOrIvFOK0G4qIoBu+6aDu4KBfvOpa3yaQU4mINo7AWcRzJ75NZh7G1Kc8eFfwa6XGhrJmrhobOQVljKtqIiIEezZhkZfiLWwQk0p5MSmlvXJulYzyaE1ifHfQkZCxtEuBdiSsjZtm5k9KNArD3UZRK21AMKYmBj7F9x5g/yKzS/CQbOL738WlFkUu36bfMVQCXM1qBR/E4Dc6OJOzB6eDF3UDo5xCKZAVRkk0i+OPksW3hZn0BtLwCgvBhF9/WFnXWNUvQ2SZ0sn80Hvdh1Xe4WlwRvmWCLIgJymhy05Kxd9CV6wdadQzj4S1gasx8cvF8sL/l7KIw3xoF9d/YrmdoNCcJomY0KTyhQ1O8XwmxBCPbBYVhjO/D/7LYQ26g3rfNd5Sk=)
2026-02-05 00:21:28.688219 | orchestrator |
2026-02-05 00:21:28.688227 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:28.688234 | orchestrator | Thursday 05 February 2026 00:21:26 +0000 (0:00:01.006) 0:00:20.515 *****
2026-02-05 00:21:28.688243 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIUQTmeVRmMvmmfpUT3Qebf773UL9FB2UFwy+yQzatXU81KLJbtL0TUdWO2L9DMGNHloEEy178WinL5aWTSME1g=)
2026-02-05 00:21:28.688250 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdA4jeFUu8HovoLxkhdiCVXq2HIW0ng+K+xhKe1NWHL)
2026-02-05 00:21:28.688259 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwQFai26fHot0xeh1xXnFd8M7OxRPlTUOg6xK6a5ZagrOaLvyo5SVxD8ZkmvpNj2H3WmrTI09F2vmxI38SslGjpvKnTDu8CWlMgc4FdYRbMsKsoNrockSI7nG18p+JAAusmXom/MXebnGMQImTBWvIYZib1IetD7XzI7W1Dh0+ggUk2CjEIm3z70Hk4lXTfwpx1cpYdOdMPg99IVdHurz5eGsp7vUbdD1GbGEfpU10Yx/eob9SUhIvwV90vMpvo/O+0MZkcN56wwvkKx5YMz8A2dDkvFfzZzt62sMHNer8qr6/FeO0WrQaQ8S7MAZPLPRxx+uYU6opijtvLo5a1eCbnUm5iEYBLjJGXXrvWGkke8Uguww4P+7CXfgu5H5rnrS79Jr0oqZ/myvYiIkcVEUVRZWeCzLWDWaLH/fGYpsVVIbw7s6Lsi1cLAJSgOILOwhaDd5knQdQx+PLhZclk+0TkCE/lLL6kfDmgaSDPVMOyNwb28AemzMZ+Qur5gH8Bqs=)
2026-02-05 00:21:28.688267 | orchestrator |
2026-02-05 00:21:28.688275 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:28.688284 | orchestrator | Thursday 05 February 2026 00:21:27 +0000 (0:00:01.049) 0:00:21.564 *****
2026-02-05 00:21:28.688291 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDVMVCIvXkba4+9JLsEUUlEVeWPIaYFZR0ZxOjD5VCfa)
2026-02-05 00:21:28.688311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEscwEl2sUeq4borfkgbSwQgbTrl5XwDNt4ZVovg3gZFwokPm54+vAl4p+hd8lfQopF2IzI7xb3c6rQ2PEF5B9cRqS+jAEQ4HRWdC63MEGKROHGfS+Sa8oyxh2fgGLvI4S/8hni8cDuexg76Nx0Xt4nwIRPpzBUpKCeePV2eu2ByrCGTI64FhsEHL+tfB2J55eCKg5MIdFY5+pHpgLGgqbwXyoLnVr7I6E0cdoFlvrEtJu3Jh/W43cGpbQlXsrrCif9U9AxNZxEdbNUPGsAwNsdpxZIT13oV76v3yrKJTEJjMXN8LPOOHjTVPPhVwH9xrun3buswYBz6mmSdODH0C1Ldq9CWo7NzDmgcvoQfo67blXKQgvLysVYhDDhsJ6/igK8gejSSMrjshlJwWSYdvKCMHf7F4z8jYrJEPpFCZ/bgdxTfMJ1SUUSUpgkK8Zs2/mAfSKjL7U2HAzBPO63r+euxmloXZcTBmKn5N2SHqy+YLcy2NgHj03VWh1dX4VAbs=)
2026-02-05 00:21:32.901058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC3OCk0DVE0DzKLN+pi9FuRC1UynET8CYb/pOVQi7fcyOPW1Ff8NdfKJmgiG8A6zCKTmrzBxEZ0Z6BC2IhSl6cw=)
2026-02-05 00:21:32.901193 | orchestrator |
2026-02-05 00:21:32.901211 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:32.901224 | orchestrator | Thursday 05 February 2026 00:21:28 +0000 (0:00:01.041) 0:00:22.606 *****
2026-02-05 00:21:32.901238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZ6FoW6KqfX1aevBvFYHgNgtZFT8dpLDTZQqJy761Vj5IFXXQzcSbhqYpE+a0rp6unhIyRA5Un1Qw1SMwGQby494ZU1ZTh8xpN0XvXGkdYN/bwUR7r6a1gqZonDd6wK6UN4/H3mmFtBu+/toZWZIRO4V8odWzSTfTqfq4wGV7OfO3nOnK+GfK1eUQjbkUUo2CMSAVbvD0redmw2neArvtuKL8EfRak3g1TQ/zmxBVTTsx4T2L87F2fm+Ae4Hospur8jClWqtdxLaF8IErBxz78mOufxwbC71lsqYMsJlB+dpl6MFgFnkkOFBULv9R534rLU7V/QAm1hHJj/Ma0Darx1AdSY0MJVf614yM37I72Xz8BNvp2WJDRIsm05MHot8ov/3uhiYKHZpNjAgxkw6W0zRmBZM5q5DGv9VQOtmdw3GXFGSP88F16dUV6F+Flnv0/KSFSRdahWRtLhNEnoY4a/QilEo3+F1vN+/wZGN8FOUMJG3NeKJdULnlRpTvbqgM=)
2026-02-05 00:21:32.901252 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFi2L/bvdgnsAe7OOyX6B3ZNbEk5s9QUGfOn6+xCmuju8YJVsFtd4hzCTJkOO+QANESgyaQgN8dlpIzWikkf6bc=)
2026-02-05 00:21:32.901265 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILvRAyFvYMSydm7yHrdYlb8DLqqAcnWRpF3Npei4+Vpr)
2026-02-05 00:21:32.901277 | orchestrator |
2026-02-05 00:21:32.901287 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:32.901298 | orchestrator | Thursday 05 February 2026 00:21:29 +0000 (0:00:01.049) 0:00:23.655 *****
2026-02-05 00:21:32.901309 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI1ergH/6TSFGtZWNrFrwpSekrXcQDoHRW3Q0k9Z5GTMc++9Ut1U9caCU9MZZP65QuVwMhr9GffiunHuJ/V+vDQ=)
2026-02-05 00:21:32.901321 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGC+8/N/jvVatqSU/xIUoc0lOv3aQoMn048GQpoIOgNOOt3omxn8bvomQ0ElyOUf3LFPBHOm3ta8vCp3UzJogQZy9TYMHdhSgQDirWBSMsMVynJh/+zgKc7jybiJl8jICrFF7XFq1mAMIJB2pD9K/MOmSofNPLZKbX44Ale8gR1OrgBS/nXcpjYb0n6tBTRwyIr/TsdbDY/yWpK/zN+cmdGJadoTt7EwNrFKYdx1jmvhHigjb7oXC9lzPcjfiAMY6NMyheD26XG/+LTqSxNXRQUsWzYB7vrV5GZVRYQdj+zfsuB4qlf7LkDB0vS2qXMI22H7G+PZVtawCnDrSg5sXffeY803ukBthxY/ecK44hfM2DhO0rQckgPewCbIIdLoRrNR6amiDxJk7XDhinMsLBCJGxQ8R9BOBVH0/ZmlhdYCmdShMd0rpVQGS1fAAA+SyD37Kno6BcWmi9hWbjdgyezyUc24OdE5NvOTkhnu90S0t0w8ykGh4+U1tRQi4lXek=)
2026-02-05 00:21:32.901332 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1vCvMxnVhulrdsF0RXVJ2cteG0WAbRqmOTwGtyF/oz)
2026-02-05 00:21:32.901343 | orchestrator |
2026-02-05 00:21:32.901353 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-05 00:21:32.901364 | orchestrator | Thursday 05 February 2026 00:21:30 +0000 (0:00:01.026) 0:00:24.682 *****
2026-02-05 00:21:32.901393 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHoa/4buKQBhzalaizWrtB6/tZ48QA9k2bwcEIh/NkLGUU6CNRjYQsGDUodg1EASDnvbodNlaNr4yMF591cgxOQ=)
2026-02-05 00:21:32.901405 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSAIPhiJAUs5t1WsIYlLad3fmKDc7oCxVAG94RNpssaGV7Eo3SG1zYvDC4niDdvjYSXpNcBGlb6KU3PB3JQ4QnnrjeqfdAmYvVOhxxO2EzmwCU3kO4XrWGuj98WFLXnn8fRYqNE2/O3huPZWLWzooV3pkqcZWJCjWrhde1WDYO1jtHMEfCZuGZo4qezTAtmgiEAlyw3fvbZ8JaEC6TsqH5gLEyILy4c/97cZmwj5aQBaDmsnJzzMYYRW74UxtAZqzfJn6/cH9q7ImmUTGutt/3Sd4f5QwQ7auKueNYrTMPUfD04HcbixcuNaNQrD83ghSZzOjartzdENzrMDPMJfLIeDwLqxTJUloTfcdPlG4zu2kUIyIcVwXwzPkFWqNEWbDZjWMXsTl2uF6LrO37B/cU2F/TbkoE2f154B9Pk98HUmPWeBfhjGEMzluNnTlTUvscYByjltbl1WKRm8NSqo62U8vMtgizoASpm/gt4vSrtdgx7nN00tD9OdUY0LOpgTk=)
2026-02-05 00:21:32.901417 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIObLoFxJIdUFaQt0wX/3AFd2kyqOOEuIzTHBtnARpJac)
2026-02-05 00:21:32.901428 | orchestrator |
2026-02-05 00:21:32.901438 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-02-05 00:21:32.901457 | orchestrator | Thursday 05 February 2026 00:21:31 +0000 (0:00:01.012) 0:00:25.694 *****
2026-02-05 00:21:32.901468 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-05 00:21:32.901479 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-05 00:21:32.901490 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-05 00:21:32.901501 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-05 00:21:32.901530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-05 00:21:32.901542 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-05 00:21:32.901554 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-05 00:21:32.901567 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:21:32.901579 | orchestrator |
2026-02-05 00:21:32.901592 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-02-05 00:21:32.901605 | orchestrator | Thursday 05 February 2026 00:21:31 +0000 (0:00:00.160) 0:00:25.854 *****
2026-02-05 00:21:32.901617 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:21:32.901629 | orchestrator |
2026-02-05 00:21:32.901642 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-02-05 00:21:32.901655 | orchestrator | Thursday 05 February 2026 00:21:31 +0000 (0:00:00.061) 0:00:25.916 *****
2026-02-05 00:21:32.901672 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:21:32.901685 | orchestrator |
2026-02-05 00:21:32.901698 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-02-05 00:21:32.901711 | orchestrator | Thursday 05 February 2026 00:21:32 +0000 (0:00:00.044) 0:00:25.961 *****
2026-02-05 00:21:32.901724 | orchestrator | changed: [testbed-manager]
2026-02-05 00:21:32.901737 | orchestrator |
2026-02-05 00:21:32.901749 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:21:32.901762 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:21:32.901775 | orchestrator |
2026-02-05 00:21:32.901787 | orchestrator |
2026-02-05 00:21:32.901799 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:21:32.901812 | orchestrator | Thursday 05 February 2026 00:21:32 +0000 (0:00:00.676) 0:00:26.637 *****
2026-02-05 00:21:32.901824 | orchestrator | ===============================================================================
2026-02-05 00:21:32.901837 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.84s
2026-02-05 00:21:32.901849 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s
2026-02-05 00:21:32.901863 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-02-05 00:21:32.901875 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-02-05 00:21:32.901888 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-02-05 00:21:32.901901 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-02-05 00:21:32.901913 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-02-05 00:21:32.901924 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-02-05 00:21:32.901934 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-02-05 00:21:32.901945 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2026-02-05 00:21:32.901956 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2026-02-05 00:21:32.901966 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2026-02-05 00:21:32.901977 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2026-02-05 00:21:32.902098 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s
2026-02-05 00:21:32.902112 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2026-02-05 00:21:32.902132 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s
2026-02-05 00:21:32.902143 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.68s
2026-02-05 00:21:32.902153 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2026-02-05 00:21:32.902164 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s
2026-02-05 00:21:32.902175 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s
2026-02-05 00:21:33.186291 | orchestrator | + osism apply squid
2026-02-05 00:21:45.200238 | orchestrator | 2026-02-05 00:21:45 | INFO  | Task 504deedb-e234-4b9e-9577-f72bdd2e0856 (squid) was prepared for execution.
2026-02-05 00:21:45.200343 | orchestrator | 2026-02-05 00:21:45 | INFO  | It takes a moment until task 504deedb-e234-4b9e-9577-f72bdd2e0856 (squid) has been started and output is visible here.
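[Editor's note, not part of the job output] The known_hosts run above reduces to a simple pattern: ssh-keyscan each host, append the collected keys, then fix the file permissions. A minimal sketch of that pattern, assuming only standard OpenSSH and coreutils tools; the host list and output path are illustrative, not taken from the job:

```shell
# Sketch of the ssh-keyscan -> known_hosts flow seen in the tasks above.
set -eu
KNOWN_HOSTS="$(mktemp)"
for host in localhost; do            # the job iterates testbed-manager and testbed-node-0..5
  # -T 5: per-host timeout; key types mirror those written in the log
  ssh-keyscan -T 5 -t rsa,ecdsa,ed25519 "$host" >>"$KNOWN_HOSTS" 2>/dev/null || true
done
sort -u -o "$KNOWN_HOSTS" "$KNOWN_HOSTS"   # drop duplicate entries
chmod 0644 "$KNOWN_HOSTS"                  # mirrors the final "Set file permissions" task
```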
2026-02-05 00:23:40.269907 | orchestrator |
2026-02-05 00:23:40.270084 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-02-05 00:23:40.270104 | orchestrator |
2026-02-05 00:23:40.270117 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-02-05 00:23:40.270130 | orchestrator | Thursday 05 February 2026 00:21:48 +0000 (0:00:00.118) 0:00:00.118 *****
2026-02-05 00:23:40.270142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 00:23:40.270154 | orchestrator |
2026-02-05 00:23:40.270165 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-02-05 00:23:40.270176 | orchestrator | Thursday 05 February 2026 00:21:48 +0000 (0:00:00.065) 0:00:00.184 *****
2026-02-05 00:23:40.270187 | orchestrator | ok: [testbed-manager]
2026-02-05 00:23:40.270251 | orchestrator |
2026-02-05 00:23:40.270264 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-02-05 00:23:40.270275 | orchestrator | Thursday 05 February 2026 00:21:50 +0000 (0:00:01.166) 0:00:01.350 *****
2026-02-05 00:23:40.270287 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-02-05 00:23:40.270298 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-02-05 00:23:40.270309 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-02-05 00:23:40.270320 | orchestrator |
2026-02-05 00:23:40.270331 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-02-05 00:23:40.270342 | orchestrator | Thursday 05 February 2026 00:21:51 +0000 (0:00:01.012) 0:00:02.362 *****
2026-02-05 00:23:40.270353 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-02-05 00:23:40.270364 | orchestrator |
2026-02-05 00:23:40.270374 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-02-05 00:23:40.270385 | orchestrator | Thursday 05 February 2026 00:21:52 +0000 (0:00:00.914) 0:00:03.277 *****
2026-02-05 00:23:40.270396 | orchestrator | ok: [testbed-manager]
2026-02-05 00:23:40.270407 | orchestrator |
2026-02-05 00:23:40.270418 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-02-05 00:23:40.270429 | orchestrator | Thursday 05 February 2026 00:21:52 +0000 (0:00:00.321) 0:00:03.599 *****
2026-02-05 00:23:40.270440 | orchestrator | changed: [testbed-manager]
2026-02-05 00:23:40.270451 | orchestrator |
2026-02-05 00:23:40.270463 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-02-05 00:23:40.270474 | orchestrator | Thursday 05 February 2026 00:21:53 +0000 (0:00:00.784) 0:00:04.383 *****
2026-02-05 00:23:40.270485 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-02-05 00:23:40.270496 | orchestrator | ok: [testbed-manager]
2026-02-05 00:23:40.270511 | orchestrator |
2026-02-05 00:23:40.270522 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-05 00:23:40.270533 | orchestrator | Thursday 05 February 2026 00:22:23 +0000 (0:00:30.608) 0:00:34.992 *****
2026-02-05 00:23:40.270629 | orchestrator | changed: [testbed-manager]
2026-02-05 00:23:40.270642 | orchestrator |
2026-02-05 00:23:40.270653 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-05 00:23:40.270664 | orchestrator | Thursday 05 February 2026 00:22:39 +0000 (0:00:15.677) 0:00:50.669 *****
2026-02-05 00:23:40.270675 | orchestrator | Pausing for 60 seconds
2026-02-05 00:23:40.270687 | orchestrator | changed: [testbed-manager]
2026-02-05 00:23:40.270698 | orchestrator |
2026-02-05 00:23:40.270709 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-05 00:23:40.270720 | orchestrator | Thursday 05 February 2026 00:23:39 +0000 (0:01:00.085) 0:01:50.754 *****
2026-02-05 00:23:40.270731 | orchestrator | ok: [testbed-manager]
2026-02-05 00:23:40.270742 | orchestrator |
2026-02-05 00:23:40.270753 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-05 00:23:40.270776 | orchestrator | Thursday 05 February 2026 00:23:39 +0000 (0:00:00.057) 0:01:50.812 *****
2026-02-05 00:23:40.270787 | orchestrator | changed: [testbed-manager]
2026-02-05 00:23:40.270797 | orchestrator |
2026-02-05 00:23:40.270826 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:23:40.270838 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:23:40.270848 | orchestrator |
2026-02-05 00:23:40.270859 | orchestrator |
2026-02-05 00:23:40.270870 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:23:40.270881 | orchestrator | Thursday 05 February 2026 00:23:40 +0000 (0:00:00.552) 0:01:51.364 *****
2026-02-05 00:23:40.270892 | orchestrator | ===============================================================================
2026-02-05 00:23:40.270902 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-02-05 00:23:40.270913 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.61s
2026-02-05 00:23:40.270924 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.68s
2026-02-05 00:23:40.270935 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.17s
2026-02-05 00:23:40.270945 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s
2026-02-05 00:23:40.270956 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.91s
2026-02-05 00:23:40.270967 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.78s
2026-02-05 00:23:40.270978 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.55s
2026-02-05 00:23:40.270988 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s
2026-02-05 00:23:40.270999 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2026-02-05 00:23:40.271010 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-02-05 00:23:40.451374 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-05 00:23:40.451474 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-05 00:23:40.493751 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-05 00:23:40.493842 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-05 00:23:40.498819 | orchestrator | + set -e
2026-02-05 00:23:40.498884 | orchestrator | + NAMESPACE=kolla/release
2026-02-05 00:23:40.498907 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-05 00:23:40.501894 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-05 00:23:40.568865 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-05 00:23:40.569454 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-05 00:23:52.406862 | orchestrator | 2026-02-05 00:23:52 | INFO  | Task 3642122a-122e-4065-a9e2-4bd7b5551f82 (operator) was prepared for execution.
2026-02-05 00:23:52.406969 | orchestrator | 2026-02-05 00:23:52 | INFO  | It takes a moment until task 3642122a-122e-4065-a9e2-4bd7b5551f82 (operator) has been started and output is visible here.
2026-02-05 00:24:09.062699 | orchestrator |
2026-02-05 00:24:09.062809 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-05 00:24:09.062827 | orchestrator |
2026-02-05 00:24:09.062838 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 00:24:09.062851 | orchestrator | Thursday 05 February 2026 00:23:56 +0000 (0:00:00.140) 0:00:00.140 *****
2026-02-05 00:24:09.062862 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:24:09.062874 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:24:09.062884 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:24:09.062895 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:24:09.062905 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:24:09.062916 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:24:09.062927 | orchestrator |
2026-02-05 00:24:09.062938 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-05 00:24:09.062949 | orchestrator | Thursday 05 February 2026 00:24:00 +0000 (0:00:04.256) 0:00:04.397
***** 2026-02-05 00:24:09.062959 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:24:09.062970 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:24:09.062981 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:24:09.062991 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:24:09.063017 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:24:09.063029 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:24:09.063040 | orchestrator | 2026-02-05 00:24:09.063051 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-05 00:24:09.063061 | orchestrator | 2026-02-05 00:24:09.063072 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-05 00:24:09.063083 | orchestrator | Thursday 05 February 2026 00:24:01 +0000 (0:00:00.768) 0:00:05.165 ***** 2026-02-05 00:24:09.063094 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:24:09.063104 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:24:09.063115 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:24:09.063126 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:24:09.063137 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:24:09.063147 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:24:09.063174 | orchestrator | 2026-02-05 00:24:09.063185 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-05 00:24:09.063196 | orchestrator | Thursday 05 February 2026 00:24:01 +0000 (0:00:00.152) 0:00:05.318 ***** 2026-02-05 00:24:09.063207 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:24:09.063218 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:24:09.063229 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:24:09.063268 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:24:09.063281 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:24:09.063294 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:24:09.063306 | orchestrator | 2026-02-05 00:24:09.063319 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-05 00:24:09.063332 | orchestrator | Thursday 05 February 2026 00:24:01 +0000 (0:00:00.152) 0:00:05.471 ***** 2026-02-05 00:24:09.063345 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:24:09.063358 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:24:09.063371 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:24:09.063383 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:24:09.063396 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:24:09.063409 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:24:09.063422 | orchestrator | 2026-02-05 00:24:09.063436 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-05 00:24:09.063449 | orchestrator | Thursday 05 February 2026 00:24:02 +0000 (0:00:00.606) 0:00:06.078 ***** 2026-02-05 00:24:09.063462 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:24:09.063475 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:24:09.063487 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:24:09.063500 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:24:09.063512 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:24:09.063525 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:24:09.063538 | orchestrator | 2026-02-05 00:24:09.063552 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-05 00:24:09.063590 | orchestrator | Thursday 05 February 2026 00:24:03 +0000 (0:00:00.794) 0:00:06.872 ***** 2026-02-05 00:24:09.063603 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-05 00:24:09.063618 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-05 00:24:09.063629 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-05 00:24:09.063639 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-05 00:24:09.063650 | 
orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-05 00:24:09.063661 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-05 00:24:09.063672 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-05 00:24:09.063683 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-05 00:24:09.063693 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-05 00:24:09.063704 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-05 00:24:09.063715 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-05 00:24:09.063726 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-05 00:24:09.063737 | orchestrator | 2026-02-05 00:24:09.063748 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-05 00:24:09.063759 | orchestrator | Thursday 05 February 2026 00:24:04 +0000 (0:00:01.190) 0:00:08.063 ***** 2026-02-05 00:24:09.063769 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:24:09.063780 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:24:09.063791 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:24:09.063802 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:24:09.063813 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:24:09.063824 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:24:09.063835 | orchestrator | 2026-02-05 00:24:09.063846 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-05 00:24:09.063858 | orchestrator | Thursday 05 February 2026 00:24:05 +0000 (0:00:01.279) 0:00:09.342 ***** 2026-02-05 00:24:09.063869 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-05 00:24:09.063880 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-05 00:24:09.063891 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-05 00:24:09.063902 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:24:09.063930 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:24:09.063941 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:24:09.063953 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:24:09.063963 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:24:09.063974 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:24:09.063985 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-05 00:24:09.063996 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-05 00:24:09.064006 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-05 00:24:09.064017 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-05 00:24:09.064028 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-05 00:24:09.064039 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-05 00:24:09.064049 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:24:09.064060 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:24:09.064071 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:24:09.064082 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:24:09.064093 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:24:09.064112 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:24:09.064123 | 
orchestrator | 2026-02-05 00:24:09.064134 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-05 00:24:09.064145 | orchestrator | Thursday 05 February 2026 00:24:07 +0000 (0:00:01.475) 0:00:10.817 ***** 2026-02-05 00:24:09.064156 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:09.064167 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:09.064178 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:24:09.064189 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:09.064199 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:09.064210 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:09.064221 | orchestrator | 2026-02-05 00:24:09.064232 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-05 00:24:09.064285 | orchestrator | Thursday 05 February 2026 00:24:07 +0000 (0:00:00.137) 0:00:10.955 ***** 2026-02-05 00:24:09.064297 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:09.064307 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:09.064319 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:24:09.064329 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:09.064340 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:09.064351 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:09.064362 | orchestrator | 2026-02-05 00:24:09.064373 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-05 00:24:09.064384 | orchestrator | Thursday 05 February 2026 00:24:07 +0000 (0:00:00.161) 0:00:11.116 ***** 2026-02-05 00:24:09.064404 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:24:09.064416 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:24:09.064427 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:24:09.064437 | orchestrator | changed: [testbed-node-3] 2026-02-05 
00:24:09.064448 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:24:09.064459 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:24:09.064470 | orchestrator | 2026-02-05 00:24:09.064481 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-05 00:24:09.064492 | orchestrator | Thursday 05 February 2026 00:24:07 +0000 (0:00:00.559) 0:00:11.675 ***** 2026-02-05 00:24:09.064502 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:09.064513 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:09.064524 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:24:09.064535 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:09.064546 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:09.064557 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:09.064568 | orchestrator | 2026-02-05 00:24:09.064578 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-05 00:24:09.064589 | orchestrator | Thursday 05 February 2026 00:24:08 +0000 (0:00:00.146) 0:00:11.822 ***** 2026-02-05 00:24:09.064600 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-05 00:24:09.064611 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:24:09.064622 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:24:09.064633 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:24:09.064644 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:24:09.064655 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:24:09.064666 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 00:24:09.064677 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:24:09.064688 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-05 00:24:09.064699 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:24:09.064709 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 
00:24:09.064720 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:24:09.064731 | orchestrator | 2026-02-05 00:24:09.064742 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-05 00:24:09.064753 | orchestrator | Thursday 05 February 2026 00:24:08 +0000 (0:00:00.712) 0:00:12.535 ***** 2026-02-05 00:24:09.064771 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:09.064782 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:09.064793 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:24:09.064804 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:09.064814 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:09.064825 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:09.064836 | orchestrator | 2026-02-05 00:24:09.064847 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-05 00:24:09.064858 | orchestrator | Thursday 05 February 2026 00:24:08 +0000 (0:00:00.132) 0:00:12.667 ***** 2026-02-05 00:24:09.064869 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:09.064880 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:09.064891 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:24:09.064901 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:09.064920 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:10.306734 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:10.306853 | orchestrator | 2026-02-05 00:24:10.306876 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-05 00:24:10.306889 | orchestrator | Thursday 05 February 2026 00:24:09 +0000 (0:00:00.125) 0:00:12.792 ***** 2026-02-05 00:24:10.306899 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:10.306909 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:10.306919 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
00:24:10.306928 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:10.306938 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:10.306947 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:10.306957 | orchestrator | 2026-02-05 00:24:10.306967 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-05 00:24:10.306976 | orchestrator | Thursday 05 February 2026 00:24:09 +0000 (0:00:00.129) 0:00:12.922 ***** 2026-02-05 00:24:10.306986 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:24:10.306995 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:24:10.307005 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:24:10.307033 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:24:10.307043 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:24:10.307053 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:24:10.307062 | orchestrator | 2026-02-05 00:24:10.307071 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-05 00:24:10.307081 | orchestrator | Thursday 05 February 2026 00:24:09 +0000 (0:00:00.670) 0:00:13.592 ***** 2026-02-05 00:24:10.307090 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:24:10.307100 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:24:10.307109 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:24:10.307119 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:24:10.307129 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:24:10.307138 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:24:10.307148 | orchestrator | 2026-02-05 00:24:10.307157 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:24:10.307168 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 00:24:10.307179 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 00:24:10.307188 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 00:24:10.307198 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 00:24:10.307208 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 00:24:10.307325 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 00:24:10.307339 | orchestrator | 2026-02-05 00:24:10.307356 | orchestrator | 2026-02-05 00:24:10.307373 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:24:10.307390 | orchestrator | Thursday 05 February 2026 00:24:10 +0000 (0:00:00.223) 0:00:13.815 ***** 2026-02-05 00:24:10.307406 | orchestrator | =============================================================================== 2026-02-05 00:24:10.307421 | orchestrator | Gathering Facts --------------------------------------------------------- 4.26s 2026-02-05 00:24:10.307439 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.48s 2026-02-05 00:24:10.307456 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s 2026-02-05 00:24:10.307474 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2026-02-05 00:24:10.307491 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2026-02-05 00:24:10.307506 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2026-02-05 00:24:10.307523 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2026-02-05 00:24:10.307539 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.67s 2026-02-05 00:24:10.307555 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2026-02-05 00:24:10.307571 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2026-02-05 00:24:10.307587 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-02-05 00:24:10.307604 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-02-05 00:24:10.307619 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2026-02-05 00:24:10.307636 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-02-05 00:24:10.307652 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-02-05 00:24:10.307667 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-02-05 00:24:10.307701 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2026-02-05 00:24:10.307731 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-02-05 00:24:10.307749 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2026-02-05 00:24:10.572134 | orchestrator | + osism apply --environment custom facts 2026-02-05 00:24:12.478583 | orchestrator | 2026-02-05 00:24:12 | INFO  | Trying to run play facts in environment custom 2026-02-05 00:24:22.580439 | orchestrator | 2026-02-05 00:24:22 | INFO  | Task ceb09c57-ba94-4f10-bf56-0685d33618ae (facts) was prepared for execution. 2026-02-05 00:24:22.580516 | orchestrator | 2026-02-05 00:24:22 | INFO  | It takes a moment until task ceb09c57-ba94-4f10-bf56-0685d33618ae (facts) has been started and output is visible here. 
2026-02-05 00:25:06.575624 | orchestrator |
2026-02-05 00:25:06.575736 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-05 00:25:06.575751 | orchestrator |
2026-02-05 00:25:06.575764 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-05 00:25:06.575775 | orchestrator | Thursday 05 February 2026 00:24:26 +0000 (0:00:00.081) 0:00:00.081 *****
2026-02-05 00:25:06.575786 | orchestrator | ok: [testbed-manager]
2026-02-05 00:25:06.575799 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:06.575810 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:06.575821 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:06.575831 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:06.575842 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:06.575853 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:06.575888 | orchestrator |
2026-02-05 00:25:06.575900 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-05 00:25:06.575911 | orchestrator | Thursday 05 February 2026 00:24:27 +0000 (0:00:01.374) 0:00:01.456 *****
2026-02-05 00:25:06.575922 | orchestrator | ok: [testbed-manager]
2026-02-05 00:25:06.575932 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:06.575943 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:06.575953 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:06.575964 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:06.575974 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:06.575985 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:06.575995 | orchestrator |
2026-02-05 00:25:06.576006 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-05 00:25:06.576016 | orchestrator |
2026-02-05 00:25:06.576027 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-05 00:25:06.576038 | orchestrator | Thursday 05 February 2026 00:24:29 +0000 (0:00:01.188) 0:00:02.644 *****
2026-02-05 00:25:06.576048 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.576059 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.576070 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.576080 | orchestrator |
2026-02-05 00:25:06.576091 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-05 00:25:06.576103 | orchestrator | Thursday 05 February 2026 00:24:29 +0000 (0:00:00.091) 0:00:02.735 *****
2026-02-05 00:25:06.576113 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.576124 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.576134 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.576145 | orchestrator |
2026-02-05 00:25:06.576156 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-05 00:25:06.576170 | orchestrator | Thursday 05 February 2026 00:24:29 +0000 (0:00:00.187) 0:00:02.923 *****
2026-02-05 00:25:06.576182 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.576194 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.576206 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.576219 | orchestrator |
2026-02-05 00:25:06.576231 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-05 00:25:06.576243 | orchestrator | Thursday 05 February 2026 00:24:29 +0000 (0:00:00.192) 0:00:03.115 *****
2026-02-05 00:25:06.576257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:25:06.576271 | orchestrator |
2026-02-05 00:25:06.576284 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-05 00:25:06.576296 | orchestrator | Thursday 05 February 2026 00:24:29 +0000 (0:00:00.120) 0:00:03.236 *****
2026-02-05 00:25:06.576309 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.576384 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.576397 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.576409 | orchestrator |
2026-02-05 00:25:06.576423 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-05 00:25:06.576435 | orchestrator | Thursday 05 February 2026 00:24:30 +0000 (0:00:00.423) 0:00:03.660 *****
2026-02-05 00:25:06.576448 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:06.576460 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:06.576474 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:06.576486 | orchestrator |
2026-02-05 00:25:06.576547 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-05 00:25:06.576559 | orchestrator | Thursday 05 February 2026 00:24:30 +0000 (0:00:00.114) 0:00:03.775 *****
2026-02-05 00:25:06.576570 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:06.576581 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:06.576592 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:06.576603 | orchestrator |
2026-02-05 00:25:06.576613 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-05 00:25:06.576633 | orchestrator | Thursday 05 February 2026 00:24:31 +0000 (0:00:01.062) 0:00:04.837 *****
2026-02-05 00:25:06.576644 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.576654 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.576727 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.576742 | orchestrator |
2026-02-05 00:25:06.576753 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-05 00:25:06.576764 | orchestrator | Thursday 05 February 2026 00:24:31 +0000 (0:00:00.471) 0:00:05.309 *****
2026-02-05 00:25:06.576775 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:06.576786 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:06.576797 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:06.576808 | orchestrator |
2026-02-05 00:25:06.576819 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-05 00:25:06.576830 | orchestrator | Thursday 05 February 2026 00:24:32 +0000 (0:00:01.098) 0:00:06.408 *****
2026-02-05 00:25:06.576840 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:06.576851 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:06.576862 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:06.576873 | orchestrator |
2026-02-05 00:25:06.576883 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-05 00:25:06.576894 | orchestrator | Thursday 05 February 2026 00:24:49 +0000 (0:00:16.571) 0:00:22.979 *****
2026-02-05 00:25:06.576905 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:06.576916 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:06.576927 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:06.576938 | orchestrator |
2026-02-05 00:25:06.576949 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-05 00:25:06.576979 | orchestrator | Thursday 05 February 2026 00:24:49 +0000 (0:00:00.077) 0:00:23.057 *****
2026-02-05 00:25:06.576990 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:06.577001 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:06.577012 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:06.577023 | orchestrator |
2026-02-05 00:25:06.577033 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-05 00:25:06.577050 | orchestrator | Thursday 05 February 2026 00:24:57 +0000 (0:00:08.093) 0:00:31.150 *****
2026-02-05 00:25:06.577061 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.577072 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.577083 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.577094 | orchestrator |
2026-02-05 00:25:06.577104 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-05 00:25:06.577115 | orchestrator | Thursday 05 February 2026 00:24:58 +0000 (0:00:00.458) 0:00:31.609 *****
2026-02-05 00:25:06.577126 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-05 00:25:06.577137 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-05 00:25:06.577148 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-05 00:25:06.577158 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-05 00:25:06.577169 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-05 00:25:06.577180 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-05 00:25:06.577190 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-05 00:25:06.577201 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-05 00:25:06.577212 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-05 00:25:06.577222 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-05 00:25:06.577233 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-05 00:25:06.577244 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-05 00:25:06.577254 | orchestrator |
2026-02-05 00:25:06.577265 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-05 00:25:06.577283 | orchestrator | Thursday 05 February 2026 00:25:01 +0000 (0:00:03.541) 0:00:35.151 *****
2026-02-05 00:25:06.577294 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.577305 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.577357 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.577369 | orchestrator |
2026-02-05 00:25:06.577380 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 00:25:06.577391 | orchestrator |
2026-02-05 00:25:06.577402 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 00:25:06.577413 | orchestrator | Thursday 05 February 2026 00:25:02 +0000 (0:00:01.297) 0:00:36.448 *****
2026-02-05 00:25:06.577424 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:25:06.577435 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:25:06.577445 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:25:06.577456 | orchestrator | ok: [testbed-manager]
2026-02-05 00:25:06.577467 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:25:06.577478 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:25:06.577488 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:25:06.577499 | orchestrator |
2026-02-05 00:25:06.577510 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:25:06.577522 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:25:06.577533 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:25:06.577546 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:25:06.577557 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:25:06.577568 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:25:06.577579 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:25:06.577590 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:25:06.577601 | orchestrator |
2026-02-05 00:25:06.577612 | orchestrator |
2026-02-05 00:25:06.577623 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:25:06.577634 | orchestrator | Thursday 05 February 2026 00:25:06 +0000 (0:00:03.665) 0:00:40.114 *****
2026-02-05 00:25:06.577645 | orchestrator | ===============================================================================
2026-02-05 00:25:06.577655 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.57s
2026-02-05 00:25:06.577666 | orchestrator | Install required packages (Debian) -------------------------------------- 8.09s
2026-02-05 00:25:06.577677 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s
2026-02-05 00:25:06.577688 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s
2026-02-05 00:25:06.577698 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-02-05 00:25:06.577709 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.30s
2026-02-05 00:25:06.577740 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2026-02-05 00:25:06.800823 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-02-05 00:25:06.800948 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-02-05 00:25:06.800984 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.47s 2026-02-05 00:25:06.800996 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2026-02-05 00:25:06.801827 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2026-02-05 00:25:06.801864 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2026-02-05 00:25:06.801884 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2026-02-05 00:25:06.801902 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2026-02-05 00:25:06.801914 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2026-02-05 00:25:06.801924 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-02-05 00:25:06.801935 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s 2026-02-05 00:25:07.071628 | orchestrator | + osism apply bootstrap 2026-02-05 00:25:19.019522 | orchestrator | 2026-02-05 00:25:19 | INFO  | Task c46a8df4-9ab6-4e7e-a085-9d0542d93362 (bootstrap) was prepared for execution. 2026-02-05 00:25:19.019635 | orchestrator | 2026-02-05 00:25:19 | INFO  | It takes a moment until task c46a8df4-9ab6-4e7e-a085-9d0542d93362 (bootstrap) has been started and output is visible here. 
2026-02-05 00:25:34.313424 | orchestrator | 2026-02-05 00:25:34.313556 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-02-05 00:25:34.313584 | orchestrator | 2026-02-05 00:25:34.313604 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-02-05 00:25:34.313624 | orchestrator | Thursday 05 February 2026 00:25:22 +0000 (0:00:00.120) 0:00:00.120 ***** 2026-02-05 00:25:34.313644 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:34.313666 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:34.313686 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:34.313702 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:34.313713 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:34.313724 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:34.313735 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:34.313746 | orchestrator | 2026-02-05 00:25:34.313758 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 00:25:34.313769 | orchestrator | 2026-02-05 00:25:34.313780 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-05 00:25:34.313791 | orchestrator | Thursday 05 February 2026 00:25:23 +0000 (0:00:00.167) 0:00:00.288 ***** 2026-02-05 00:25:34.313802 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:34.313813 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:34.313823 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:34.313834 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:34.313845 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:34.313856 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:34.313867 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:34.313878 | orchestrator | 2026-02-05 00:25:34.313889 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-02-05 00:25:34.313899 | orchestrator | 2026-02-05 00:25:34.313910 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-05 00:25:34.313921 | orchestrator | Thursday 05 February 2026 00:25:26 +0000 (0:00:03.828) 0:00:04.116 ***** 2026-02-05 00:25:34.313933 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-05 00:25:34.313944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-05 00:25:34.313955 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-05 00:25:34.313966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:25:34.313977 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-05 00:25:34.313987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:25:34.313998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-05 00:25:34.314009 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-05 00:25:34.314082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:25:34.314120 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 00:25:34.314132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 00:25:34.314142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 00:25:34.314154 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-05 00:25:34.314164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 00:25:34.314175 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 00:25:34.314186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 00:25:34.314198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 00:25:34.314209 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-3)  2026-02-05 00:25:34.314220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 00:25:34.314230 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-05 00:25:34.314241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 00:25:34.314252 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-05 00:25:34.314263 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 00:25:34.314274 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:25:34.314285 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-05 00:25:34.314295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 00:25:34.314306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 00:25:34.314317 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:25:34.314328 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-05 00:25:34.314339 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:25:34.314380 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-05 00:25:34.314391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 00:25:34.314402 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-05 00:25:34.314413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-05 00:25:34.314423 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 00:25:34.314434 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-05 00:25:34.314445 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-05 00:25:34.314455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-05 00:25:34.314484 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-3)  2026-02-05 00:25:34.314495 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 00:25:34.314506 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:25:34.314517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 00:25:34.314528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-05 00:25:34.314538 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-05 00:25:34.314549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 00:25:34.314560 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-05 00:25:34.314592 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 00:25:34.314604 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 00:25:34.314614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 00:25:34.314625 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 00:25:34.314636 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:25:34.314647 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 00:25:34.314658 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 00:25:34.314668 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:25:34.314679 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 00:25:34.314702 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:25:34.314713 | orchestrator | 2026-02-05 00:25:34.314724 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-05 00:25:34.314735 | orchestrator | 2026-02-05 00:25:34.314746 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-05 00:25:34.314757 | orchestrator | Thursday 05 February 2026 00:25:27 +0000 
(0:00:00.362) 0:00:04.478 ***** 2026-02-05 00:25:34.314767 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:34.314778 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:34.314789 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:34.314800 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:34.314811 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:34.314822 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:34.314833 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:34.314843 | orchestrator | 2026-02-05 00:25:34.314854 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-05 00:25:34.314865 | orchestrator | Thursday 05 February 2026 00:25:28 +0000 (0:00:01.282) 0:00:05.761 ***** 2026-02-05 00:25:34.314876 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:34.314887 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:34.314898 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:34.314909 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:34.314920 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:34.314930 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:34.314941 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:34.314952 | orchestrator | 2026-02-05 00:25:34.314963 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-05 00:25:34.314973 | orchestrator | Thursday 05 February 2026 00:25:29 +0000 (0:00:01.166) 0:00:06.927 ***** 2026-02-05 00:25:34.314986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:25:34.314999 | orchestrator | 2026-02-05 00:25:34.315010 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-05 00:25:34.315021 | orchestrator | 
Thursday 05 February 2026 00:25:29 +0000 (0:00:00.253) 0:00:07.181 ***** 2026-02-05 00:25:34.315032 | orchestrator | changed: [testbed-manager] 2026-02-05 00:25:34.315043 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:25:34.315054 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:25:34.315065 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:25:34.315076 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:34.315086 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:25:34.315097 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:25:34.315108 | orchestrator | 2026-02-05 00:25:34.315119 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-05 00:25:34.315130 | orchestrator | Thursday 05 February 2026 00:25:31 +0000 (0:00:02.021) 0:00:09.203 ***** 2026-02-05 00:25:34.315140 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:25:34.315152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:25:34.315165 | orchestrator | 2026-02-05 00:25:34.315176 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-05 00:25:34.315187 | orchestrator | Thursday 05 February 2026 00:25:32 +0000 (0:00:00.242) 0:00:09.445 ***** 2026-02-05 00:25:34.315198 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:25:34.315209 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:25:34.315220 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:34.315231 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:25:34.315241 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:25:34.315252 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:25:34.315263 | orchestrator | 2026-02-05 00:25:34.315281 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-02-05 00:25:34.315297 | orchestrator | Thursday 05 February 2026 00:25:33 +0000 (0:00:00.994) 0:00:10.440 ***** 2026-02-05 00:25:34.315308 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:25:34.315319 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:34.315330 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:25:34.315341 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:25:34.315402 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:25:34.315414 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:25:34.315425 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:25:34.315436 | orchestrator | 2026-02-05 00:25:34.315447 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-05 00:25:34.315458 | orchestrator | Thursday 05 February 2026 00:25:33 +0000 (0:00:00.596) 0:00:11.036 ***** 2026-02-05 00:25:34.315469 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:25:34.315480 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:25:34.315491 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:25:34.315501 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:25:34.315512 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:25:34.315523 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:25:34.315534 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:34.315545 | orchestrator | 2026-02-05 00:25:34.315555 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-05 00:25:34.315568 | orchestrator | Thursday 05 February 2026 00:25:34 +0000 (0:00:00.413) 0:00:11.449 ***** 2026-02-05 00:25:34.315579 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:25:34.315589 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:25:34.315607 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:25:45.721844 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 00:25:45.721978 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:25:45.721995 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:25:45.722006 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:25:45.722078 | orchestrator | 2026-02-05 00:25:45.722091 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-05 00:25:45.722103 | orchestrator | Thursday 05 February 2026 00:25:34 +0000 (0:00:00.187) 0:00:11.637 ***** 2026-02-05 00:25:45.722116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:25:45.722146 | orchestrator | 2026-02-05 00:25:45.722157 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-05 00:25:45.722169 | orchestrator | Thursday 05 February 2026 00:25:34 +0000 (0:00:00.260) 0:00:11.897 ***** 2026-02-05 00:25:45.722180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:25:45.722192 | orchestrator | 2026-02-05 00:25:45.722203 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-05 00:25:45.722214 | orchestrator | Thursday 05 February 2026 00:25:34 +0000 (0:00:00.334) 0:00:12.232 ***** 2026-02-05 00:25:45.722224 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.722236 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.722247 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.722258 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.722268 | orchestrator | ok: [testbed-node-0] 2026-02-05 
00:25:45.722280 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.722291 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.722301 | orchestrator | 2026-02-05 00:25:45.722312 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-05 00:25:45.722323 | orchestrator | Thursday 05 February 2026 00:25:36 +0000 (0:00:01.434) 0:00:13.666 ***** 2026-02-05 00:25:45.722358 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:25:45.722471 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:25:45.722485 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:25:45.722499 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:25:45.722511 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:25:45.722524 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:25:45.722537 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:25:45.722550 | orchestrator | 2026-02-05 00:25:45.722562 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-05 00:25:45.722575 | orchestrator | Thursday 05 February 2026 00:25:36 +0000 (0:00:00.283) 0:00:13.949 ***** 2026-02-05 00:25:45.722588 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.722600 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.722612 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.722625 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.722637 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.722650 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.722663 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.722675 | orchestrator | 2026-02-05 00:25:45.722688 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-05 00:25:45.722701 | orchestrator | Thursday 05 February 2026 00:25:37 +0000 (0:00:00.519) 0:00:14.469 ***** 2026-02-05 00:25:45.722714 | orchestrator | skipping: 
[testbed-manager] 2026-02-05 00:25:45.722724 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:25:45.722735 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:25:45.722746 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:25:45.722756 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:25:45.722767 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:25:45.722777 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:25:45.722788 | orchestrator | 2026-02-05 00:25:45.722800 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-05 00:25:45.722812 | orchestrator | Thursday 05 February 2026 00:25:37 +0000 (0:00:00.226) 0:00:14.696 ***** 2026-02-05 00:25:45.722822 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.722833 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:25:45.722843 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:25:45.722854 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:25:45.722864 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:25:45.722875 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:45.722885 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:25:45.722896 | orchestrator | 2026-02-05 00:25:45.722907 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-05 00:25:45.722918 | orchestrator | Thursday 05 February 2026 00:25:37 +0000 (0:00:00.545) 0:00:15.242 ***** 2026-02-05 00:25:45.722928 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.722939 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:25:45.722950 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:25:45.722960 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:25:45.722971 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:25:45.722982 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:45.722992 | orchestrator | changed: 
[testbed-node-2] 2026-02-05 00:25:45.723003 | orchestrator | 2026-02-05 00:25:45.723013 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-05 00:25:45.723024 | orchestrator | Thursday 05 February 2026 00:25:39 +0000 (0:00:01.077) 0:00:16.320 ***** 2026-02-05 00:25:45.723035 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.723045 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.723056 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.723067 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.723077 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.723088 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.723098 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.723109 | orchestrator | 2026-02-05 00:25:45.723120 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-05 00:25:45.723139 | orchestrator | Thursday 05 February 2026 00:25:40 +0000 (0:00:01.139) 0:00:17.459 ***** 2026-02-05 00:25:45.723172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:25:45.723184 | orchestrator | 2026-02-05 00:25:45.723195 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-05 00:25:45.723205 | orchestrator | Thursday 05 February 2026 00:25:40 +0000 (0:00:00.245) 0:00:17.705 ***** 2026-02-05 00:25:45.723216 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:25:45.723227 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:25:45.723237 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:25:45.723248 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:45.723258 | orchestrator | changed: [testbed-node-0] 2026-02-05 
00:25:45.723269 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:25:45.723280 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:25:45.723290 | orchestrator | 2026-02-05 00:25:45.723301 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-05 00:25:45.723312 | orchestrator | Thursday 05 February 2026 00:25:41 +0000 (0:00:01.228) 0:00:18.934 ***** 2026-02-05 00:25:45.723322 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.723333 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.723344 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.723354 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.723383 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.723394 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.723405 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.723416 | orchestrator | 2026-02-05 00:25:45.723427 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-05 00:25:45.723437 | orchestrator | Thursday 05 February 2026 00:25:41 +0000 (0:00:00.184) 0:00:19.119 ***** 2026-02-05 00:25:45.723448 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.723459 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.723469 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.723480 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.723490 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.723501 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.723511 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.723522 | orchestrator | 2026-02-05 00:25:45.723533 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-05 00:25:45.723544 | orchestrator | Thursday 05 February 2026 00:25:42 +0000 (0:00:00.169) 0:00:19.288 ***** 2026-02-05 00:25:45.723554 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.723565 | 
orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.723575 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.723586 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.723596 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.723607 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.723617 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.723628 | orchestrator | 2026-02-05 00:25:45.723638 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-05 00:25:45.723649 | orchestrator | Thursday 05 February 2026 00:25:42 +0000 (0:00:00.177) 0:00:19.466 ***** 2026-02-05 00:25:45.723660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:25:45.723673 | orchestrator | 2026-02-05 00:25:45.723684 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-05 00:25:45.723694 | orchestrator | Thursday 05 February 2026 00:25:42 +0000 (0:00:00.235) 0:00:19.701 ***** 2026-02-05 00:25:45.723705 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.723716 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.723733 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.723744 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.723754 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.723765 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.723776 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.723786 | orchestrator | 2026-02-05 00:25:45.723797 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-05 00:25:45.723808 | orchestrator | Thursday 05 February 2026 00:25:42 +0000 (0:00:00.488) 0:00:20.190 ***** 2026-02-05 00:25:45.723819 | orchestrator | 
skipping: [testbed-manager] 2026-02-05 00:25:45.723829 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:25:45.723840 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:25:45.723851 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:25:45.723861 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:25:45.723883 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:25:45.723894 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:25:45.723905 | orchestrator | 2026-02-05 00:25:45.723920 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-05 00:25:45.723931 | orchestrator | Thursday 05 February 2026 00:25:43 +0000 (0:00:00.190) 0:00:20.380 ***** 2026-02-05 00:25:45.723942 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.723953 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.723964 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.723974 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.723985 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:25:45.723995 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:25:45.724006 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:25:45.724016 | orchestrator | 2026-02-05 00:25:45.724027 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-05 00:25:45.724038 | orchestrator | Thursday 05 February 2026 00:25:44 +0000 (0:00:00.995) 0:00:21.375 ***** 2026-02-05 00:25:45.724049 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:45.724059 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:45.724070 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:45.724080 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:45.724091 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:45.724102 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:45.724112 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:45.724123 | orchestrator | 
TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
Thursday 05 February 2026 00:25:44 +0000 (0:00:00.504) 0:00:21.880 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.repository : Update package cache] *************************
Thursday 05 February 2026 00:25:45 +0000 (0:00:01.085) 0:00:22.966 *****
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.services.rsyslog : Gather variables for each operating system] *****
Thursday 05 February 2026 00:26:01 +0000 (0:00:16.209) 0:00:39.176 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
Thursday 05 February 2026 00:26:02 +0000 (0:00:00.191) 0:00:39.367 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
Thursday 05 February 2026 00:26:02 +0000 (0:00:00.194) 0:00:39.562 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
Thursday 05 February 2026 00:26:02 +0000 (0:00:00.214) 0:00:39.777 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Install rsyslog package] ************************
Thursday 05 February 2026 00:26:02 +0000 (0:00:00.311) 0:00:40.088 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
Thursday 05 February 2026 00:26:04 +0000 (0:00:01.739) 0:00:41.827 *****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Thursday 05 February 2026 00:26:05 +0000 (0:00:01.116) 0:00:42.943 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Thursday 05 February 2026 00:26:06 +0000 (0:00:00.934) 0:00:43.878 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Thursday 05 February 2026 00:26:06 +0000 (0:00:00.275) 0:00:44.154 *****
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Thursday 05 February 2026 00:26:07 +0000 (0:00:01.056) 0:00:45.210 *****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Thursday 05 February 2026 00:26:08 +0000 (0:00:00.217) 0:00:45.427 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Thursday 05 February 2026 00:26:08 +0000 (0:00:00.273) 0:00:45.701 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-5]
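The "Forward syslog message to local fluentd daemon" task above presumably drops an rsyslog forwarding rule onto each host. A minimal sketch of such a rule is shown below; the file name and port are assumptions for illustration (fluentd's in_syslog input commonly listens on 5140), not values taken from this job:

```
# /etc/rsyslog.d/60-fluentd.conf (hypothetical path and port)
# Forward all facilities/priorities to a fluentd syslog listener on localhost.
*.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
```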
TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Thursday 05 February 2026 00:26:10 +0000 (0:00:01.667) 0:00:47.369 *****
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Thursday 05 February 2026 00:26:11 +0000 (0:00:01.144) 0:00:48.514 *****
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Thursday 05 February 2026 00:26:21 +0000 (0:00:10.727) 0:00:59.241 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Thursday 05 February 2026 00:26:23 +0000 (0:00:01.689) 0:01:00.931 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Thursday 05 February 2026 00:26:24 +0000 (0:00:00.889) 0:01:01.820 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Thursday 05 February 2026 00:26:24 +0000 (0:00:00.206) 0:01:02.027 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Thursday 05 February 2026 00:26:24 +0000 (0:00:00.287) 0:01:02.252 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.packages : Install needrestart package] ********************
Thursday 05 February 2026 00:26:25 +0000 (0:00:00.287) 0:01:02.539 *****
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Thursday 05 February 2026 00:26:27 +0000 (0:00:01.800) 0:01:04.340 *****
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Thursday 05 February 2026 00:26:27 +0000 (0:00:00.583) 0:01:04.924 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Update package cache] ***************************
Thursday 05 February 2026 00:26:27 +0000 (0:00:00.205) 0:01:05.129 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-5]

TASK [osism.commons.packages : Download upgrade packages] **********************
Thursday 05 February 2026 00:26:29 +0000 (0:00:01.311) 0:01:06.440 *****
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.commons.packages : Upgrade packages] *******************************
Thursday 05 February 2026 00:26:31 +0000 (0:00:01.824) 0:01:08.265 *****
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-5]

TASK [osism.commons.packages : Download required packages] *********************
Thursday 05 February 2026 00:26:33 +0000 (0:00:02.644) 0:01:10.909 *****
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]

TASK [osism.commons.packages : Install required packages] **********************
Thursday 05 February 2026 00:27:15 +0000 (0:00:41.545) 0:01:52.454 *****
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Thursday 05 February 2026 00:28:40 +0000 (0:01:25.186) 0:03:17.641 *****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Thursday 05 February 2026 00:28:42 +0000 (0:00:01.977) 0:03:19.618 *****
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Thursday 05 February 2026 00:28:47 +0000 (0:00:04.870) 0:03:24.489 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Thursday 05 February 2026 00:28:47 +0000 (0:00:00.357) 0:03:24.846 *****
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Thursday 05 February 2026 00:28:48 +0000 (0:00:00.737) 0:03:25.584 *****
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Thursday 05 February 2026 00:28:55 +0000 (0:00:06.879) 0:03:32.463 *****
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
00:28:58.308524 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-02-05 00:28:58.308535 | orchestrator | Thursday 05 February 2026 00:28:56 +0000 (0:00:01.578) 0:03:34.041 ***** 2026-02-05 00:28:58.308546 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:28:58.308557 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:28:58.308697 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:28:58.308712 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:28:58.308724 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:28:58.308743 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:28:58.308762 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:28:58.308781 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:28:58.308800 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 00:28:58.308811 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 00:28:58.308834 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 00:29:12.658870 | orchestrator | 2026-02-05 00:29:12.659000 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-02-05 00:29:12.659017 | orchestrator | Thursday 05 February 2026 00:28:58 +0000 (0:00:01.514) 0:03:35.555 ***** 2026-02-05 00:29:12.659028 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:29:12.659041 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:29:12.659051 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:29:12.659059 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:29:12.659065 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:29:12.659072 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 00:29:12.659079 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:29:12.659085 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:29:12.659091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 00:29:12.659098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 00:29:12.659104 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 00:29:12.659110 | orchestrator | 2026-02-05 00:29:12.659116 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-05 00:29:12.659143 | orchestrator | Thursday 05 February 2026 00:28:58 +0000 (0:00:00.577) 0:03:36.133 ***** 2026-02-05 00:29:12.659150 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 00:29:12.659156 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:29:12.659162 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 00:29:12.659171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 00:29:12.659181 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:29:12.659193 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 00:29:12.659208 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 00:29:12.659217 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:29:12.659227 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-05 00:29:12.659255 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-05 00:29:12.659266 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-05 00:29:12.659276 | orchestrator | 2026-02-05 00:29:12.659285 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-05 00:29:12.659292 | orchestrator | Thursday 05 February 2026 00:29:00 +0000 (0:00:01.588) 0:03:37.722 ***** 2026-02-05 00:29:12.659298 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:29:12.659304 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:29:12.659310 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:29:12.659317 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:29:12.659323 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:29:12.659329 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:29:12.659335 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:29:12.659341 | orchestrator | 2026-02-05 00:29:12.659348 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-05 00:29:12.659354 | orchestrator | Thursday 05 February 2026 00:29:00 +0000 (0:00:00.287) 0:03:38.009 ***** 2026-02-05 00:29:12.659360 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:29:12.659367 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:29:12.659373 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:29:12.659379 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:29:12.659390 | 
orchestrator | ok: [testbed-manager] 2026-02-05 00:29:12.659396 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:29:12.659402 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:29:12.659408 | orchestrator | 2026-02-05 00:29:12.659414 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-05 00:29:12.659420 | orchestrator | Thursday 05 February 2026 00:29:06 +0000 (0:00:05.708) 0:03:43.717 ***** 2026-02-05 00:29:12.659426 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-05 00:29:12.659433 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:29:12.659439 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-05 00:29:12.659445 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:29:12.659451 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-05 00:29:12.659457 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-05 00:29:12.659463 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:29:12.659469 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-05 00:29:12.659475 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:29:12.659482 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:29:12.659488 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-05 00:29:12.659494 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:29:12.659500 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-05 00:29:12.659506 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:29:12.659512 | orchestrator | 2026-02-05 00:29:12.659518 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-05 00:29:12.659595 | orchestrator | Thursday 05 February 2026 00:29:06 +0000 (0:00:00.299) 0:03:44.016 ***** 2026-02-05 00:29:12.659605 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-05 00:29:12.659611 | orchestrator | ok: [testbed-node-4] => 
(item=cron) 2026-02-05 00:29:12.659618 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-05 00:29:12.659641 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-05 00:29:12.659648 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-05 00:29:12.659654 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-05 00:29:12.659660 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-05 00:29:12.659666 | orchestrator | 2026-02-05 00:29:12.659672 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-05 00:29:12.659678 | orchestrator | Thursday 05 February 2026 00:29:08 +0000 (0:00:01.248) 0:03:45.265 ***** 2026-02-05 00:29:12.659686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:29:12.659695 | orchestrator | 2026-02-05 00:29:12.659701 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-05 00:29:12.659707 | orchestrator | Thursday 05 February 2026 00:29:08 +0000 (0:00:00.379) 0:03:45.644 ***** 2026-02-05 00:29:12.659713 | orchestrator | ok: [testbed-manager] 2026-02-05 00:29:12.659720 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:29:12.659726 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:29:12.659732 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:29:12.659738 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:29:12.659744 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:29:12.659750 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:29:12.659756 | orchestrator | 2026-02-05 00:29:12.659762 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-05 00:29:12.659768 | orchestrator | Thursday 05 February 2026 00:29:09 +0000 (0:00:01.316) 0:03:46.961 
***** 2026-02-05 00:29:12.659774 | orchestrator | ok: [testbed-manager] 2026-02-05 00:29:12.659780 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:29:12.659787 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:29:12.659793 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:29:12.659799 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:29:12.659804 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:29:12.659810 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:29:12.659816 | orchestrator | 2026-02-05 00:29:12.659823 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-05 00:29:12.659893 | orchestrator | Thursday 05 February 2026 00:29:10 +0000 (0:00:00.612) 0:03:47.574 ***** 2026-02-05 00:29:12.659900 | orchestrator | changed: [testbed-manager] 2026-02-05 00:29:12.659906 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:29:12.659913 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:29:12.659919 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:29:12.659925 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:29:12.659931 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:29:12.659940 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:29:12.659951 | orchestrator | 2026-02-05 00:29:12.659961 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-05 00:29:12.659972 | orchestrator | Thursday 05 February 2026 00:29:10 +0000 (0:00:00.642) 0:03:48.216 ***** 2026-02-05 00:29:12.659983 | orchestrator | ok: [testbed-manager] 2026-02-05 00:29:12.659994 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:29:12.660004 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:29:12.660015 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:29:12.660024 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:29:12.660030 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:29:12.660036 | orchestrator | ok: [testbed-node-2] 2026-02-05 
00:29:12.660042 | orchestrator | 2026-02-05 00:29:12.660048 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-05 00:29:12.660062 | orchestrator | Thursday 05 February 2026 00:29:11 +0000 (0:00:00.616) 0:03:48.833 ***** 2026-02-05 00:29:12.660075 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249879.5854242, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:12.660086 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249931.356027, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:12.660097 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249920.8494864, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:12.660128 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249932.9572344, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613007 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249896.309637, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613108 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249892.665953, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613125 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249898.6748004, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613165 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613192 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613204 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613216 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613254 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613267 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 
00:29:17.613278 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:29:17.613299 | orchestrator | 2026-02-05 00:29:17.613313 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-05 00:29:17.613326 | orchestrator | Thursday 05 February 2026 00:29:12 +0000 (0:00:01.067) 0:03:49.901 ***** 2026-02-05 00:29:17.613337 | orchestrator | changed: [testbed-manager] 2026-02-05 00:29:17.613350 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:29:17.613360 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:29:17.613371 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:29:17.613382 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:29:17.613393 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:29:17.613404 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:29:17.613415 | orchestrator | 2026-02-05 00:29:17.613426 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-05 00:29:17.613437 | orchestrator | Thursday 05 February 2026 00:29:13 +0000 (0:00:01.141) 0:03:51.043 ***** 2026-02-05 00:29:17.613448 | orchestrator | changed: [testbed-manager] 2026-02-05 00:29:17.613458 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:29:17.613469 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:29:17.613480 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:29:17.613491 | orchestrator | changed: [testbed-node-1] 
2026-02-05 00:29:17.613502 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:29:17.613513 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:29:17.613526 | orchestrator | 2026-02-05 00:29:17.613538 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-05 00:29:17.613551 | orchestrator | Thursday 05 February 2026 00:29:14 +0000 (0:00:01.192) 0:03:52.235 ***** 2026-02-05 00:29:17.613564 | orchestrator | changed: [testbed-manager] 2026-02-05 00:29:17.613577 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:29:17.613617 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:29:17.613628 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:29:17.613639 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:29:17.613650 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:29:17.613661 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:29:17.613671 | orchestrator | 2026-02-05 00:29:17.613682 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-05 00:29:17.613694 | orchestrator | Thursday 05 February 2026 00:29:16 +0000 (0:00:01.204) 0:03:53.440 ***** 2026-02-05 00:29:17.613784 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:29:17.613798 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:29:17.613809 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:29:17.613820 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:29:17.613831 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:29:17.613842 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:29:17.613852 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:29:17.613863 | orchestrator | 2026-02-05 00:29:17.613874 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-05 00:29:17.613885 | orchestrator | Thursday 05 February 2026 00:29:16 +0000 (0:00:00.282) 0:03:53.723 ***** 2026-02-05 
00:29:17.613896 | orchestrator | ok: [testbed-manager] 2026-02-05 00:29:17.613908 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:29:17.613919 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:29:17.613930 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:29:17.613941 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:29:17.613952 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:29:17.613962 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:29:17.613973 | orchestrator | 2026-02-05 00:29:17.613984 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-05 00:29:17.613995 | orchestrator | Thursday 05 February 2026 00:29:17 +0000 (0:00:00.748) 0:03:54.471 ***** 2026-02-05 00:29:17.614009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:29:17.614093 | orchestrator | 2026-02-05 00:29:17.614106 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-05 00:29:17.614127 | orchestrator | Thursday 05 February 2026 00:29:17 +0000 (0:00:00.392) 0:03:54.864 ***** 2026-02-05 00:30:37.677083 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:37.677162 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:30:37.677171 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:30:37.677177 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:30:37.677183 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:30:37.677188 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:30:37.677194 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:30:37.677199 | orchestrator | 2026-02-05 00:30:37.677206 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-05 00:30:37.677212 | orchestrator | 
Thursday 05 February 2026 00:29:26 +0000 (0:00:08.675) 0:04:03.540 ***** 2026-02-05 00:30:37.677218 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:37.677223 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:37.677228 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:37.677236 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:37.677245 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:37.677252 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:30:37.677265 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:30:37.677275 | orchestrator | 2026-02-05 00:30:37.677283 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-05 00:30:37.677291 | orchestrator | Thursday 05 February 2026 00:29:27 +0000 (0:00:01.330) 0:04:04.870 ***** 2026-02-05 00:30:37.677299 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:37.677308 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:37.677316 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:37.677324 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:30:37.677333 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:37.677341 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:37.677350 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:30:37.677359 | orchestrator | 2026-02-05 00:30:37.677364 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-05 00:30:37.677370 | orchestrator | Thursday 05 February 2026 00:29:28 +0000 (0:00:01.097) 0:04:05.968 ***** 2026-02-05 00:30:37.677375 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:37.677381 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:37.677386 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:37.677391 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:37.677397 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:30:37.677402 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:37.677407 | orchestrator | ok: 
[testbed-node-2]
2026-02-05 00:30:37.677412 | orchestrator |
2026-02-05 00:30:37.677418 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-05 00:30:37.677424 | orchestrator | Thursday 05 February 2026 00:29:28 +0000 (0:00:00.271) 0:04:06.240 *****
2026-02-05 00:30:37.677429 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:37.677434 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:37.677439 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:37.677444 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:37.677464 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:37.677469 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:37.677474 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:37.677479 | orchestrator |
2026-02-05 00:30:37.677484 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-05 00:30:37.677490 | orchestrator | Thursday 05 February 2026 00:29:29 +0000 (0:00:00.298) 0:04:06.538 *****
2026-02-05 00:30:37.677495 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:37.677500 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:37.677505 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:37.677510 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:37.677534 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:37.677540 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:37.677545 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:37.677550 | orchestrator |
2026-02-05 00:30:37.677558 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-05 00:30:37.677571 | orchestrator | Thursday 05 February 2026 00:29:29 +0000 (0:00:00.310) 0:04:06.849 *****
2026-02-05 00:30:37.677579 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:37.677587 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:37.677594 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:37.677602 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:37.677609 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:37.677617 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:37.677624 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:37.677633 | orchestrator |
2026-02-05 00:30:37.677641 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-05 00:30:37.677672 | orchestrator | Thursday 05 February 2026 00:29:35 +0000 (0:00:05.906) 0:04:12.756 *****
2026-02-05 00:30:37.677684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:30:37.677695 | orchestrator |
2026-02-05 00:30:37.677705 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-05 00:30:37.677715 | orchestrator | Thursday 05 February 2026 00:29:35 +0000 (0:00:00.446) 0:04:13.202 *****
2026-02-05 00:30:37.677724 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677734 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-05 00:30:37.677743 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677752 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-05 00:30:37.677761 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:37.677770 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677779 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-05 00:30:37.677788 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:37.677797 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677806 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-05 00:30:37.677815 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:37.677824 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677834 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:37.677845 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-05 00:30:37.677853 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677862 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-05 00:30:37.677888 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:37.677897 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:37.677908 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-05 00:30:37.677917 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-05 00:30:37.677925 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:37.677933 | orchestrator |
2026-02-05 00:30:37.677942 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-05 00:30:37.677951 | orchestrator | Thursday 05 February 2026 00:29:36 +0000 (0:00:00.355) 0:04:13.558 *****
2026-02-05 00:30:37.677965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:30:37.677974 | orchestrator |
2026-02-05 00:30:37.677983 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-05 00:30:37.677993 | orchestrator | Thursday 05 February 2026 00:29:36 +0000 (0:00:00.436) 0:04:13.994 *****
2026-02-05 00:30:37.678069 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-05 00:30:37.678081 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:37.678091 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-05 00:30:37.678101 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-05 00:30:37.678109 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:37.678117 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-05 00:30:37.678125 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:37.678134 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-05 00:30:37.678142 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:37.678149 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-05 00:30:37.678157 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:37.678165 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:37.678172 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-05 00:30:37.678180 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:37.678188 | orchestrator |
2026-02-05 00:30:37.678196 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-05 00:30:37.678206 | orchestrator | Thursday 05 February 2026 00:29:37 +0000 (0:00:00.340) 0:04:14.335 *****
2026-02-05 00:30:37.678215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:30:37.678224 | orchestrator |
2026-02-05 00:30:37.678232 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-05 00:30:37.678240 | orchestrator | Thursday 05 February 2026 00:29:37 +0000 (0:00:00.436) 0:04:14.771 *****
2026-02-05 00:30:37.678248 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:37.678256 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:37.678264 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:37.678272 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:37.678281 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:37.678295 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:37.678306 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:37.678315 | orchestrator |
2026-02-05 00:30:37.678323 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-05 00:30:37.678332 | orchestrator | Thursday 05 February 2026 00:30:11 +0000 (0:00:34.143) 0:04:48.915 *****
2026-02-05 00:30:37.678342 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:37.678351 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:37.678360 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:37.678368 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:37.678377 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:37.678386 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:37.678394 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:37.678402 | orchestrator |
2026-02-05 00:30:37.678411 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-05 00:30:37.678420 | orchestrator | Thursday 05 February 2026 00:30:20 +0000 (0:00:08.764) 0:04:57.680 *****
2026-02-05 00:30:37.678429 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:37.678437 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:37.678445 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:37.678453 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:37.678462 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:37.678470 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:37.678479 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:37.678487 | orchestrator |
2026-02-05 00:30:37.678496 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-05 00:30:37.678505 | orchestrator | Thursday 05 February 2026 00:30:28 +0000 (0:00:08.256) 0:05:05.936 *****
2026-02-05 00:30:37.678524 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:37.678532 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:37.678541 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:37.678549 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:37.678558 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:37.678566 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:37.678575 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:37.678585 | orchestrator |
2026-02-05 00:30:37.678594 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-05 00:30:37.678604 | orchestrator | Thursday 05 February 2026 00:30:30 +0000 (0:00:01.934) 0:05:07.871 *****
2026-02-05 00:30:37.678613 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:37.678622 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:37.678630 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:37.678639 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:37.678669 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:37.678678 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:37.678686 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:37.678694 | orchestrator |
2026-02-05 00:30:37.678714 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-05 00:30:49.139610 | orchestrator | Thursday 05 February 2026 00:30:37 +0000 (0:00:07.052) 0:05:14.923 *****
2026-02-05 00:30:49.139824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:30:49.139857 | orchestrator |
2026-02-05 00:30:49.139878 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-05 00:30:49.139897 | orchestrator | Thursday 05 February 2026 00:30:38 +0000 (0:00:00.403) 0:05:15.327 *****
2026-02-05 00:30:49.139917 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:49.139936 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:49.139955 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:49.139974 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:49.139992 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:49.140009 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:49.140027 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:49.140047 | orchestrator |
2026-02-05 00:30:49.140067 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-05 00:30:49.140087 | orchestrator | Thursday 05 February 2026 00:30:38 +0000 (0:00:00.780) 0:05:16.107 *****
2026-02-05 00:30:49.140106 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:49.140129 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:49.140150 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:49.140174 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:49.140195 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:49.140216 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:49.140238 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:49.140259 | orchestrator |
2026-02-05 00:30:49.140281 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-05 00:30:49.140303 | orchestrator | Thursday 05 February 2026 00:30:41 +0000 (0:00:02.201) 0:05:18.309 *****
2026-02-05 00:30:49.140325 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:49.140346 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:49.140368 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:49.140390 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:49.140413 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:49.140435 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:49.140459 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:49.140481 | orchestrator |
2026-02-05 00:30:49.140503 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-05 00:30:49.140524 | orchestrator | Thursday 05 February 2026 00:30:41 +0000 (0:00:00.733) 0:05:19.042 *****
2026-02-05 00:30:49.140581 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:49.140601 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:49.140619 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:49.140637 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:49.140812 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:49.140833 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:49.140851 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:49.140870 | orchestrator |
2026-02-05 00:30:49.140891 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-05 00:30:49.140913 | orchestrator | Thursday 05 February 2026 00:30:42 +0000 (0:00:00.272) 0:05:19.315 *****
2026-02-05 00:30:49.140934 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:49.140953 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:49.140971 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:49.140988 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:49.141025 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:49.141045 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:49.141065 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:49.141086 | orchestrator |
2026-02-05 00:30:49.141141 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-05 00:30:49.141154 | orchestrator | Thursday 05 February 2026 00:30:42 +0000 (0:00:00.347) 0:05:19.662 *****
2026-02-05 00:30:49.141165 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:49.141176 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:49.141186 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:49.141197 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:49.141208 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:49.141219 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:49.141230 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:49.141241 | orchestrator |
2026-02-05 00:30:49.141252 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-05 00:30:49.141262 | orchestrator | Thursday 05 February 2026 00:30:42 +0000 (0:00:00.285) 0:05:19.947 *****
2026-02-05 00:30:49.141273 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:49.141284 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:49.141302 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:49.141319 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:49.141337 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:49.141355 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:49.141372 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:49.141389 | orchestrator |
2026-02-05 00:30:49.141407 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-05 00:30:49.141426 | orchestrator | Thursday 05 February 2026 00:30:42 +0000 (0:00:00.259) 0:05:20.206 *****
2026-02-05 00:30:49.141444 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:49.141464 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:49.141483 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:49.141502 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:49.141515 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:49.141525 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:49.141536 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:49.141547 | orchestrator |
2026-02-05 00:30:49.141558 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-05 00:30:49.141569 | orchestrator | Thursday 05 February 2026 00:30:43 +0000 (0:00:00.273) 0:05:20.480 *****
2026-02-05 00:30:49.141580 | orchestrator | ok: [testbed-manager] =>
2026-02-05 00:30:49.141591 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141601 | orchestrator | ok: [testbed-node-3] =>
2026-02-05 00:30:49.141612 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141622 | orchestrator | ok: [testbed-node-4] =>
2026-02-05 00:30:49.141633 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141643 | orchestrator | ok: [testbed-node-5] =>
2026-02-05 00:30:49.141685 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141728 | orchestrator | ok: [testbed-node-0] =>
2026-02-05 00:30:49.141756 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141767 | orchestrator | ok: [testbed-node-1] =>
2026-02-05 00:30:49.141778 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141789 | orchestrator | ok: [testbed-node-2] =>
2026-02-05 00:30:49.141799 | orchestrator |   docker_version: 5:27.5.1
2026-02-05 00:30:49.141810 | orchestrator |
2026-02-05 00:30:49.141821 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-05 00:30:49.141832 | orchestrator | Thursday 05 February 2026 00:30:43 +0000 (0:00:00.247) 0:05:20.727 *****
2026-02-05 00:30:49.141843 | orchestrator | ok: [testbed-manager] =>
2026-02-05 00:30:49.141853 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141864 | orchestrator | ok: [testbed-node-3] =>
2026-02-05 00:30:49.141875 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141885 | orchestrator | ok: [testbed-node-4] =>
2026-02-05 00:30:49.141896 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141907 | orchestrator | ok: [testbed-node-5] =>
2026-02-05 00:30:49.141918 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141928 | orchestrator | ok: [testbed-node-0] =>
2026-02-05 00:30:49.141939 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141949 | orchestrator | ok: [testbed-node-1] =>
2026-02-05 00:30:49.141960 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141971 | orchestrator | ok: [testbed-node-2] =>
2026-02-05 00:30:49.141982 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-05 00:30:49.141993 | orchestrator |
2026-02-05 00:30:49.142004 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-05 00:30:49.142074 | orchestrator | Thursday 05 February 2026 00:30:43 +0000 (0:00:00.269) 0:05:20.997 *****
2026-02-05 00:30:49.142088 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:49.142099 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:49.142110 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:49.142120 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:49.142131 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:49.142142 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:49.142153 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:49.142163 | orchestrator |
2026-02-05 00:30:49.142174 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-05 00:30:49.142185 | orchestrator | Thursday 05 February 2026 00:30:44 +0000 (0:00:00.273) 0:05:21.271 *****
2026-02-05 00:30:49.142271 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:49.142286 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:49.142297 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:49.142307 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:49.142318 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:49.142329 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:49.142340 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:49.142350 | orchestrator |
2026-02-05 00:30:49.142361 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-05 00:30:49.142372 | orchestrator | Thursday 05 February 2026 00:30:44 +0000 (0:00:00.228) 0:05:21.500 *****
2026-02-05 00:30:49.142386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:30:49.142399 | orchestrator |
2026-02-05 00:30:49.142410 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-05 00:30:49.142434 | orchestrator | Thursday 05 February 2026 00:30:44 +0000 (0:00:00.397) 0:05:21.897 *****
2026-02-05 00:30:49.142453 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:49.142470 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:49.142488 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:49.142507 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:49.142525 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:49.142540 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:49.142562 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:49.142572 | orchestrator |
2026-02-05 00:30:49.142584 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-05 00:30:49.142594 | orchestrator | Thursday 05 February 2026 00:30:45 +0000 (0:00:01.033) 0:05:22.930 *****
2026-02-05 00:30:49.142605 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:49.142616 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:49.142626 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:49.142637 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:49.142647 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:49.142734 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:49.142747 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:49.142757 | orchestrator |
2026-02-05 00:30:49.142768 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-05 00:30:49.142780 | orchestrator | Thursday 05 February 2026 00:30:48 +0000 (0:00:03.054) 0:05:25.985 *****
2026-02-05 00:30:49.142791 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-05 00:30:49.142802 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-05 00:30:49.142813 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-05 00:30:49.142824 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-05 00:30:49.142835 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-05 00:30:49.142845 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-05 00:30:49.142856 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:49.142867 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-05 00:30:49.142877 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-05 00:30:49.142888 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-05 00:30:49.142899 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:49.142909 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-05 00:30:49.142920 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-05 00:30:49.142931 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-05 00:30:49.142941 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:49.142952 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-05 00:30:49.142976 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-05 00:31:54.007419 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-05 00:31:54.007516 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:31:54.007528 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-05 00:31:54.007536 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-05 00:31:54.007544 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-05 00:31:54.007552 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:31:54.007559 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:31:54.007567 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-05 00:31:54.007574 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-05 00:31:54.007581 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-05 00:31:54.007589 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:31:54.007596 | orchestrator |
2026-02-05 00:31:54.007605 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-05 00:31:54.007613 | orchestrator | Thursday 05 February 2026 00:30:49 +0000 (0:00:00.598) 0:05:26.584 *****
2026-02-05 00:31:54.007621 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.007628 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.007635 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.007643 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.007650 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.007658 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.007665 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.007748 | orchestrator |
2026-02-05 00:31:54.007760 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-05 00:31:54.007767 | orchestrator | Thursday 05 February 2026 00:30:56 +0000 (0:00:07.275) 0:05:33.859 *****
2026-02-05 00:31:54.007774 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.007781 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.007788 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.007795 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.007803 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.007810 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.007817 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.007824 | orchestrator |
2026-02-05 00:31:54.007831 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-05 00:31:54.007838 | orchestrator | Thursday 05 February 2026 00:30:57 +0000 (0:00:01.073) 0:05:34.933 *****
2026-02-05 00:31:54.007845 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.007852 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.007859 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.007866 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.007873 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.007881 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.007888 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.007895 | orchestrator |
2026-02-05 00:31:54.007902 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-05 00:31:54.007909 | orchestrator | Thursday 05 February 2026 00:31:06 +0000 (0:00:09.219) 0:05:44.152 *****
2026-02-05 00:31:54.007916 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:54.007924 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.007937 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.007950 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.007960 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.007969 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.007977 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.007985 | orchestrator |
2026-02-05 00:31:54.007994 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-05 00:31:54.008003 | orchestrator | Thursday 05 February 2026 00:31:10 +0000 (0:00:03.467) 0:05:47.619 *****
2026-02-05 00:31:54.008011 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.008019 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008028 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008037 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008049 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008062 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008073 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008082 | orchestrator |
2026-02-05 00:31:54.008090 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-05 00:31:54.008099 | orchestrator | Thursday 05 February 2026 00:31:11 +0000 (0:00:01.236) 0:05:48.856 *****
2026-02-05 00:31:54.008107 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.008115 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008124 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008133 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008141 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008149 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008157 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008169 | orchestrator |
2026-02-05 00:31:54.008183 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-05 00:31:54.008196 | orchestrator | Thursday 05 February 2026 00:31:12 +0000 (0:00:01.370) 0:05:50.226 *****
2026-02-05 00:31:54.008205 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:31:54.008213 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:31:54.008222 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:31:54.008230 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:31:54.008238 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:31:54.008253 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:31:54.008261 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:54.008270 | orchestrator |
2026-02-05 00:31:54.008279 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-05 00:31:54.008291 | orchestrator | Thursday 05 February 2026 00:31:13 +0000 (0:00:00.502) 0:05:50.729 *****
2026-02-05 00:31:54.008303 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.008316 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008329 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008341 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008353 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008365 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008378 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008390 | orchestrator |
2026-02-05 00:31:54.008453 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-05 00:31:54.008478 | orchestrator | Thursday 05 February 2026 00:31:23 +0000 (0:00:10.360) 0:06:01.089 *****
2026-02-05 00:31:54.008486 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:54.008493 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008500 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008507 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008514 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008521 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008528 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008536 | orchestrator |
2026-02-05 00:31:54.008543 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-05 00:31:54.008550 | orchestrator | Thursday 05 February 2026 00:31:24 +0000 (0:00:00.914) 0:06:02.004 *****
2026-02-05 00:31:54.008557 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.008564 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008571 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008578 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008586 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008593 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008600 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008607 | orchestrator |
2026-02-05 00:31:54.008614 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-05 00:31:54.008621 | orchestrator | Thursday 05 February 2026 00:31:35 +0000 (0:00:10.769) 0:06:12.773 *****
2026-02-05 00:31:54.008628 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.008635 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008642 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008649 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008656 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008663 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008670 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008677 | orchestrator |
2026-02-05 00:31:54.008684 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-05 00:31:54.008719 | orchestrator | Thursday 05 February 2026 00:31:47 +0000 (0:00:11.668) 0:06:24.441 *****
2026-02-05 00:31:54.008730 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-05 00:31:54.008743 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-05 00:31:54.008756 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-05 00:31:54.008766 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-05 00:31:54.008773 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-05 00:31:54.008780 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-05 00:31:54.008787 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-05 00:31:54.008794 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-05 00:31:54.008801 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-05 00:31:54.008808 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-05 00:31:54.008823 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-05 00:31:54.008830 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-05 00:31:54.008837 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-05 00:31:54.008844 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-05 00:31:54.008851 | orchestrator |
2026-02-05 00:31:54.008858 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-05 00:31:54.008865 | orchestrator | Thursday 05 February 2026 00:31:48 +0000 (0:00:01.202) 0:06:25.644 *****
2026-02-05 00:31:54.008872 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:31:54.008884 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:31:54.008891 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:31:54.008898 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:31:54.008906 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:31:54.008913 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:31:54.008919 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:31:54.008926 | orchestrator |
2026-02-05 00:31:54.008934 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-05 00:31:54.008941 | orchestrator | Thursday 05 February 2026 00:31:48 +0000 (0:00:00.526) 0:06:26.170 *****
2026-02-05 00:31:54.008948 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:54.008955 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:54.008962 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:54.008969 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:54.008976 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:54.008983 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:54.008990 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:54.008997 | orchestrator |
2026-02-05 00:31:54.009004 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-05 00:31:54.009012 | orchestrator | Thursday 05 February 2026 00:31:53 +0000 (0:00:04.185) 0:06:30.355 *****
2026-02-05 00:31:54.009020 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:31:54.009026 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:31:54.009033 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:31:54.009040 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:31:54.009047 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:31:54.009054 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:31:54.009061 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:31:54.009068 | orchestrator |
2026-02-05 00:31:54.009078 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-05 00:31:54.009091 | orchestrator | Thursday 05 February 2026 00:31:53 +0000 (0:00:00.462) 0:06:30.818 *****
2026-02-05 00:31:54.009103 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2026-02-05 00:31:54.009116 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-05 00:31:54.009128 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:31:54.009141 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-05 00:31:54.009153 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-05 00:31:54.009167 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:31:54.009176 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-05 00:31:54.009183 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-05 00:31:54.009190 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:31:54.009204 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-05 00:32:13.462737 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-05 00:32:13.462868 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:32:13.462880 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-05 00:32:13.462885 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-05 00:32:13.462889 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:32:13.462910 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-05 00:32:13.462916 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-05 00:32:13.462950 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:32:13.462956 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-05 00:32:13.462961 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-05 00:32:13.462966 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:32:13.462970 | orchestrator | 2026-02-05 00:32:13.462977 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-02-05 00:32:13.462982 | 
orchestrator | Thursday 05 February 2026 00:31:54 +0000 (0:00:00.672) 0:06:31.490 *****
2026-02-05 00:32:13.462987 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:13.462991 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:13.462995 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:13.463000 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:13.463004 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:13.463008 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:13.463012 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:13.463017 | orchestrator |
2026-02-05 00:32:13.463025 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-05 00:32:13.463033 | orchestrator | Thursday 05 February 2026 00:31:54 +0000 (0:00:00.504) 0:06:31.995 *****
2026-02-05 00:32:13.463039 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:13.463046 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:13.463052 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:13.463059 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:13.463066 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:13.463072 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:13.463079 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:13.463087 | orchestrator |
2026-02-05 00:32:13.463094 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-05 00:32:13.463101 | orchestrator | Thursday 05 February 2026 00:31:55 +0000 (0:00:00.519) 0:06:32.499 *****
2026-02-05 00:32:13.463107 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:13.463111 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:13.463115 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:13.463119 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:13.463123 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:13.463127 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:13.463131 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:13.463135 | orchestrator |
2026-02-05 00:32:13.463139 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-05 00:32:13.463143 | orchestrator | Thursday 05 February 2026 00:31:55 +0000 (0:00:00.519) 0:06:33.019 *****
2026-02-05 00:32:13.463147 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463152 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:13.463156 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:13.463160 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:13.463164 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:13.463168 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:13.463172 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:13.463176 | orchestrator |
2026-02-05 00:32:13.463181 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-05 00:32:13.463185 | orchestrator | Thursday 05 February 2026 00:31:57 +0000 (0:00:02.001) 0:06:35.021 *****
2026-02-05 00:32:13.463191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:13.463197 | orchestrator |
2026-02-05 00:32:13.463201 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-05 00:32:13.463205 | orchestrator | Thursday 05 February 2026 00:31:58 +0000 (0:00:00.840) 0:06:35.861 *****
2026-02-05 00:32:13.463219 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463223 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:13.463227 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:13.463231 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:13.463236 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:13.463240 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:13.463244 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:13.463248 | orchestrator |
2026-02-05 00:32:13.463252 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-05 00:32:13.463256 | orchestrator | Thursday 05 February 2026 00:31:59 +0000 (0:00:00.794) 0:06:36.656 *****
2026-02-05 00:32:13.463260 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463264 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:13.463268 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:13.463273 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:13.463278 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:13.463282 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:13.463287 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:13.463292 | orchestrator |
2026-02-05 00:32:13.463297 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-05 00:32:13.463302 | orchestrator | Thursday 05 February 2026 00:32:00 +0000 (0:00:00.883) 0:06:37.539 *****
2026-02-05 00:32:13.463306 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463311 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:13.463316 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:13.463321 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:13.463326 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:13.463330 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:13.463335 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:13.463340 | orchestrator |
2026-02-05 00:32:13.463345 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-05 00:32:13.463365 |
orchestrator | Thursday 05 February 2026 00:32:01 +0000 (0:00:01.513) 0:06:39.053 *****
2026-02-05 00:32:13.463372 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:13.463379 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:13.463386 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:13.463393 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:13.463400 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:13.463407 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:13.463414 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:13.463420 | orchestrator |
2026-02-05 00:32:13.463427 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-05 00:32:13.463435 | orchestrator | Thursday 05 February 2026 00:32:03 +0000 (0:00:01.389) 0:06:40.442 *****
2026-02-05 00:32:13.463442 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463449 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:13.463456 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:13.463462 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:13.463467 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:13.463472 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:13.463477 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:13.463482 | orchestrator |
2026-02-05 00:32:13.463487 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-02-05 00:32:13.463492 | orchestrator | Thursday 05 February 2026 00:32:04 +0000 (0:00:01.399) 0:06:41.842 *****
2026-02-05 00:32:13.463497 | orchestrator | changed: [testbed-manager]
2026-02-05 00:32:13.463501 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:13.463506 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:13.463511 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:13.463515 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:13.463520 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:13.463525 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:13.463530 | orchestrator |
2026-02-05 00:32:13.463535 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-02-05 00:32:13.463545 | orchestrator | Thursday 05 February 2026 00:32:05 +0000 (0:00:01.415) 0:06:43.257 *****
2026-02-05 00:32:13.463550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:13.463555 | orchestrator |
2026-02-05 00:32:13.463559 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-02-05 00:32:13.463564 | orchestrator | Thursday 05 February 2026 00:32:07 +0000 (0:00:01.043) 0:06:44.301 *****
2026-02-05 00:32:13.463569 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463574 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:13.463579 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:13.463583 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:13.463588 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:13.463593 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:13.463598 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:13.463603 | orchestrator |
2026-02-05 00:32:13.463607 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-02-05 00:32:13.463612 | orchestrator | Thursday 05 February 2026 00:32:08 +0000 (0:00:01.479) 0:06:45.780 *****
2026-02-05 00:32:13.463617 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463623 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:13.463628 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:13.463633 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:13.463637 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:13.463642 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:13.463657 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:13.463661 | orchestrator |
2026-02-05 00:32:13.463665 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-02-05 00:32:13.463669 | orchestrator | Thursday 05 February 2026 00:32:09 +0000 (0:00:01.173) 0:06:46.954 *****
2026-02-05 00:32:13.463674 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463678 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:13.463682 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:13.463686 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:13.463690 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:13.463694 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:13.463698 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:13.463716 | orchestrator |
2026-02-05 00:32:13.463724 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-02-05 00:32:13.463730 | orchestrator | Thursday 05 February 2026 00:32:10 +0000 (0:00:01.144) 0:06:48.098 *****
2026-02-05 00:32:13.463737 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:13.463744 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:13.463750 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:13.463757 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:13.463764 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:13.463770 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:13.463776 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:13.463784 | orchestrator |
2026-02-05 00:32:13.463788 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-02-05 00:32:13.463793 | orchestrator | Thursday 05 February 2026 00:32:12 +0000 (0:00:01.494) 0:06:49.592 *****
2026-02-05 00:32:13.463797 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:13.463801 | orchestrator |
2026-02-05 00:32:13.463805 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:13.463810 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.832) 0:06:50.425 *****
2026-02-05 00:32:13.463814 | orchestrator |
2026-02-05 00:32:13.463818 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:13.463822 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.044) 0:06:50.464 *****
2026-02-05 00:32:13.463831 | orchestrator |
2026-02-05 00:32:13.463835 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:13.463839 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.044) 0:06:50.508 *****
2026-02-05 00:32:13.463843 | orchestrator |
2026-02-05 00:32:13.463847 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:13.463856 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.038) 0:06:50.546 *****
2026-02-05 00:32:40.171287 | orchestrator |
2026-02-05 00:32:40.171419 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:40.171444 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.037) 0:06:50.583 *****
2026-02-05 00:32:40.171460 | orchestrator |
2026-02-05 00:32:40.171475 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:40.171491 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.043) 0:06:50.627 *****
2026-02-05 00:32:40.171504 | orchestrator |
2026-02-05 00:32:40.171518 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-05 00:32:40.171533 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.038) 0:06:50.665 *****
2026-02-05 00:32:40.171546 | orchestrator |
2026-02-05 00:32:40.171561 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-05 00:32:40.171575 | orchestrator | Thursday 05 February 2026 00:32:13 +0000 (0:00:00.037) 0:06:50.702 *****
2026-02-05 00:32:40.171590 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:40.171605 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:40.171619 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:40.171634 | orchestrator |
2026-02-05 00:32:40.171648 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-02-05 00:32:40.171661 | orchestrator | Thursday 05 February 2026 00:32:14 +0000 (0:00:01.369) 0:06:52.072 *****
2026-02-05 00:32:40.171676 | orchestrator | changed: [testbed-manager]
2026-02-05 00:32:40.171691 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:40.171707 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:40.171777 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:40.171791 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:40.171817 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:40.171831 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:40.171846 | orchestrator |
2026-02-05 00:32:40.171862 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-02-05 00:32:40.171877 | orchestrator | Thursday 05 February 2026 00:32:16 +0000 (0:00:01.468) 0:06:53.540 *****
2026-02-05 00:32:40.171892 | orchestrator | changed: [testbed-manager]
2026-02-05 00:32:40.171908 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:40.171922 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:40.171937 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:40.171951 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:40.171966 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:40.171981 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:40.171996 | orchestrator |
2026-02-05 00:32:40.172012 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-02-05 00:32:40.172026 | orchestrator | Thursday 05 February 2026 00:32:17 +0000 (0:00:01.187) 0:06:54.727 *****
2026-02-05 00:32:40.172040 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:40.172055 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:40.172170 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:40.172183 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:40.172196 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:40.172210 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:40.172225 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:40.172240 | orchestrator |
2026-02-05 00:32:40.172254 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-02-05 00:32:40.172270 | orchestrator | Thursday 05 February 2026 00:32:19 +0000 (0:00:02.318) 0:06:57.046 *****
2026-02-05 00:32:40.172311 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:40.172321 | orchestrator |
2026-02-05 00:32:40.172345 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-02-05 00:32:40.172355 | orchestrator | Thursday 05 February 2026 00:32:19 +0000 (0:00:00.103) 0:06:57.149 *****
2026-02-05 00:32:40.172367 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:40.172382 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:40.172396 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:40.172409 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:40.172422 |
orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:40.172437 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:40.172452 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:40.172466 | orchestrator |
2026-02-05 00:32:40.172480 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-02-05 00:32:40.172495 | orchestrator | Thursday 05 February 2026 00:32:20 +0000 (0:00:01.011) 0:06:58.161 *****
2026-02-05 00:32:40.172508 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:40.172521 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:40.172534 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:40.172549 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:40.172563 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:40.172575 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:40.172590 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:40.172603 | orchestrator |
2026-02-05 00:32:40.172617 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-02-05 00:32:40.172631 | orchestrator | Thursday 05 February 2026 00:32:21 +0000 (0:00:00.509) 0:06:58.670 *****
2026-02-05 00:32:40.172646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:40.172662 | orchestrator |
2026-02-05 00:32:40.172676 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-02-05 00:32:40.172690 | orchestrator | Thursday 05 February 2026 00:32:22 +0000 (0:00:01.032) 0:06:59.703 *****
2026-02-05 00:32:40.172704 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:40.172744 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:40.172759 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:40.172772 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:40.172787 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:40.172802 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:40.172818 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:40.172833 | orchestrator |
2026-02-05 00:32:40.172849 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-02-05 00:32:40.172864 | orchestrator | Thursday 05 February 2026 00:32:23 +0000 (0:00:00.831) 0:07:00.535 *****
2026-02-05 00:32:40.172874 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-02-05 00:32:40.172910 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-02-05 00:32:40.172925 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-02-05 00:32:40.172939 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-02-05 00:32:40.172951 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-02-05 00:32:40.172966 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-02-05 00:32:40.172980 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-02-05 00:32:40.172995 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-02-05 00:32:40.173009 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-02-05 00:32:40.173021 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-02-05 00:32:40.173034 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-02-05 00:32:40.173046 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-02-05 00:32:40.173077 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-02-05 00:32:40.173091 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-02-05 00:32:40.173105 | orchestrator |
2026-02-05 00:32:40.173119 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-02-05 00:32:40.173132 | orchestrator | Thursday 05 February 2026 00:32:25 +0000 (0:00:02.579) 0:07:03.115 *****
2026-02-05 00:32:40.173146 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:40.173161 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:40.173175 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:40.173190 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:40.173203 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:40.173217 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:40.173232 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:40.173246 | orchestrator |
2026-02-05 00:32:40.173260 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-02-05 00:32:40.173274 | orchestrator | Thursday 05 February 2026 00:32:26 +0000 (0:00:00.665) 0:07:03.780 *****
2026-02-05 00:32:40.173291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:40.173307 | orchestrator |
2026-02-05 00:32:40.173322 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-02-05 00:32:40.173336 | orchestrator | Thursday 05 February 2026 00:32:27 +0000 (0:00:00.780) 0:07:04.560 *****
2026-02-05 00:32:40.173348 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:40.173361 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:40.173375 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:40.173389 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:40.173403 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:40.173417 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:40.173430 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:40.173443 | orchestrator |
2026-02-05 00:32:40.173456 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-02-05 00:32:40.173470 | orchestrator | Thursday 05 February 2026 00:32:28 +0000 (0:00:00.845) 0:07:05.406 *****
2026-02-05 00:32:40.173484 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:40.173496 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:40.173508 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:40.173522 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:40.173535 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:40.173549 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:40.173562 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:40.173575 | orchestrator |
2026-02-05 00:32:40.173589 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-02-05 00:32:40.173603 | orchestrator | Thursday 05 February 2026 00:32:29 +0000 (0:00:00.998) 0:07:06.405 *****
2026-02-05 00:32:40.173618 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:40.173632 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:40.173645 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:40.173658 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:40.173673 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:40.173686 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:40.173700 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:40.173713 | orchestrator |
2026-02-05 00:32:40.173799 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-02-05 00:32:40.173813 | orchestrator | Thursday 05 February 2026 00:32:29 +0000 (0:00:00.461) 0:07:06.866 *****
2026-02-05 00:32:40.173827 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:40.173840 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:40.173854 |
orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:40.173868 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:40.173881 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:40.173894 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:40.173923 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:40.173937 | orchestrator |
2026-02-05 00:32:40.173950 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-05 00:32:40.173964 | orchestrator | Thursday 05 February 2026 00:32:31 +0000 (0:00:01.615) 0:07:08.482 *****
2026-02-05 00:32:40.173978 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:40.173992 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:40.174005 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:40.174093 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:40.174110 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:40.174124 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:40.174137 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:40.174152 | orchestrator |
2026-02-05 00:32:40.174166 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-05 00:32:40.174181 | orchestrator | Thursday 05 February 2026 00:32:31 +0000 (0:00:00.468) 0:07:08.950 *****
2026-02-05 00:32:40.174197 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:40.174211 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:40.174224 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:40.174239 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:40.174252 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:40.174266 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:40.174301 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:12.723649 | orchestrator |
2026-02-05 00:33:12.723887 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-05 00:33:12.723918 | orchestrator | Thursday 05 February 2026 00:32:40 +0000 (0:00:08.462) 0:07:17.412 *****
2026-02-05 00:33:12.723939 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:12.723984 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:12.724004 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:12.724020 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:12.724039 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:12.724055 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:12.724072 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:12.724088 | orchestrator |
2026-02-05 00:33:12.724131 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-05 00:33:12.724151 | orchestrator | Thursday 05 February 2026 00:32:41 +0000 (0:00:01.532) 0:07:18.945 *****
2026-02-05 00:33:12.724170 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:12.724185 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:12.724202 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:12.724219 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:12.724239 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:12.724257 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:12.724276 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:12.724294 | orchestrator |
2026-02-05 00:33:12.724314 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-05 00:33:12.724332 | orchestrator | Thursday 05 February 2026 00:32:43 +0000 (0:00:01.728) 0:07:20.674 *****
2026-02-05 00:33:12.724348 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:12.724367 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:12.724386 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:12.724404 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:12.724423 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:12.724441 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:12.724461 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:12.724478 | orchestrator |
2026-02-05 00:33:12.724494 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-05 00:33:12.724511 | orchestrator | Thursday 05 February 2026 00:32:45 +0000 (0:00:01.669) 0:07:22.344 *****
2026-02-05 00:33:12.724530 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:12.724547 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:33:12.724563 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:33:12.724580 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:33:12.724628 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:33:12.724646 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:33:12.724664 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:33:12.724680 | orchestrator |
2026-02-05 00:33:12.724696 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-05 00:33:12.724713 | orchestrator | Thursday 05 February 2026 00:32:45 +0000 (0:00:00.818) 0:07:23.162 *****
2026-02-05 00:33:12.724818 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:12.724843 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:12.724862 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:12.724880 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:12.724896 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:12.724913 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:12.724931 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:12.724948 | orchestrator |
2026-02-05 00:33:12.724966 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-05 00:33:12.724985 | orchestrator | Thursday 05 February 2026 00:32:46 +0000 (0:00:00.918) 0:07:24.081 *****
2026-02-05 00:33:12.725015 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:12.725034 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:12.725052 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:12.725070 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:12.725088 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:12.725106 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:12.725123 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:12.725141 | orchestrator | 2026-02-05 00:33:12.725159 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-05 00:33:12.725177 | orchestrator | Thursday 05 February 2026 00:32:47 +0000 (0:00:00.467) 0:07:24.548 ***** 2026-02-05 00:33:12.725194 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.725213 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:12.725230 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.725248 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.725267 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.725285 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.725303 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:12.725322 | orchestrator | 2026-02-05 00:33:12.725341 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-05 00:33:12.725360 | orchestrator | Thursday 05 February 2026 00:32:47 +0000 (0:00:00.486) 0:07:25.035 ***** 2026-02-05 00:33:12.725380 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.725400 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:12.725418 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.725435 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.725455 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.725474 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.725491 | orchestrator | ok: [testbed-node-2] 2026-02-05 
00:33:12.725509 | orchestrator | 2026-02-05 00:33:12.725527 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-05 00:33:12.725545 | orchestrator | Thursday 05 February 2026 00:32:48 +0000 (0:00:00.503) 0:07:25.539 ***** 2026-02-05 00:33:12.725563 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.725581 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:12.725598 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.725617 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.725635 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.725652 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.725671 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:12.725689 | orchestrator | 2026-02-05 00:33:12.725707 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-05 00:33:12.725725 | orchestrator | Thursday 05 February 2026 00:32:48 +0000 (0:00:00.673) 0:07:26.212 ***** 2026-02-05 00:33:12.725773 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:12.725791 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.725809 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.725848 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.725866 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.725883 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.725900 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:12.725918 | orchestrator | 2026-02-05 00:33:12.725971 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-05 00:33:12.725990 | orchestrator | Thursday 05 February 2026 00:32:54 +0000 (0:00:05.651) 0:07:31.864 ***** 2026-02-05 00:33:12.726006 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:12.726106 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:12.726126 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:12.726144 
| orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:12.726161 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:12.726177 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:12.726194 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:12.726211 | orchestrator | 2026-02-05 00:33:12.726229 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-05 00:33:12.726247 | orchestrator | Thursday 05 February 2026 00:32:55 +0000 (0:00:00.528) 0:07:32.393 ***** 2026-02-05 00:33:12.726267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:12.726289 | orchestrator | 2026-02-05 00:33:12.726307 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-05 00:33:12.726325 | orchestrator | Thursday 05 February 2026 00:32:56 +0000 (0:00:00.961) 0:07:33.355 ***** 2026-02-05 00:33:12.726343 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.726361 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.726380 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:12.726399 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.726419 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.726437 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:12.726456 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.726475 | orchestrator | 2026-02-05 00:33:12.726494 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-05 00:33:12.726512 | orchestrator | Thursday 05 February 2026 00:32:58 +0000 (0:00:02.114) 0:07:35.470 ***** 2026-02-05 00:33:12.726530 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.726550 | orchestrator | ok: [testbed-node-3] 2026-02-05 
00:33:12.726569 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.726587 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.726606 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.726624 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.726643 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:12.726661 | orchestrator | 2026-02-05 00:33:12.726679 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-05 00:33:12.726702 | orchestrator | Thursday 05 February 2026 00:32:59 +0000 (0:00:01.139) 0:07:36.609 ***** 2026-02-05 00:33:12.726722 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:12.726769 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:12.726780 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:12.726791 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:12.726802 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:12.726813 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:12.726823 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:12.726834 | orchestrator | 2026-02-05 00:33:12.726845 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-05 00:33:12.726856 | orchestrator | Thursday 05 February 2026 00:33:00 +0000 (0:00:00.841) 0:07:37.451 ***** 2026-02-05 00:33:12.726879 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726892 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726919 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726930 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726941 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726951 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726961 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 00:33:12.726970 | orchestrator | 2026-02-05 00:33:12.726980 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-05 00:33:12.726989 | orchestrator | Thursday 05 February 2026 00:33:02 +0000 (0:00:01.872) 0:07:39.323 ***** 2026-02-05 00:33:12.726999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:12.727010 | orchestrator | 2026-02-05 00:33:12.727019 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-05 00:33:12.727029 | orchestrator | Thursday 05 February 2026 00:33:02 +0000 (0:00:00.782) 0:07:40.105 ***** 2026-02-05 00:33:12.727039 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:12.727049 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:12.727058 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:12.727068 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:12.727077 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:12.727087 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:12.727096 | orchestrator | changed: 
[testbed-node-5] 2026-02-05 00:33:12.727106 | orchestrator | 2026-02-05 00:33:12.727130 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-05 00:33:44.426240 | orchestrator | Thursday 05 February 2026 00:33:12 +0000 (0:00:09.859) 0:07:49.965 ***** 2026-02-05 00:33:44.426364 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:44.426386 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:44.426398 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:44.426411 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:44.426422 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:44.426432 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:44.426442 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:44.426453 | orchestrator | 2026-02-05 00:33:44.426464 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-05 00:33:44.426481 | orchestrator | Thursday 05 February 2026 00:33:14 +0000 (0:00:01.934) 0:07:51.899 ***** 2026-02-05 00:33:44.426498 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:44.426509 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:44.426520 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:44.426532 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:44.426544 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:44.426556 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:44.426568 | orchestrator | 2026-02-05 00:33:44.426580 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-05 00:33:44.426588 | orchestrator | Thursday 05 February 2026 00:33:15 +0000 (0:00:01.308) 0:07:53.208 ***** 2026-02-05 00:33:44.426596 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.426604 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.426612 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.426619 | orchestrator | changed: 
[testbed-node-5] 2026-02-05 00:33:44.426631 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.426677 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.426689 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.426700 | orchestrator | 2026-02-05 00:33:44.426711 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-05 00:33:44.426723 | orchestrator | 2026-02-05 00:33:44.426736 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-05 00:33:44.426772 | orchestrator | Thursday 05 February 2026 00:33:17 +0000 (0:00:01.239) 0:07:54.447 ***** 2026-02-05 00:33:44.426784 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:44.426795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:44.426805 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:44.426816 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:44.426827 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:44.426838 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:44.426850 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:44.426862 | orchestrator | 2026-02-05 00:33:44.426875 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-05 00:33:44.426888 | orchestrator | 2026-02-05 00:33:44.426901 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-05 00:33:44.426914 | orchestrator | Thursday 05 February 2026 00:33:17 +0000 (0:00:00.690) 0:07:55.138 ***** 2026-02-05 00:33:44.426927 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.426938 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.426950 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.426958 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.426965 | orchestrator | changed: [testbed-node-0] 2026-02-05 
00:33:44.426972 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.426979 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.426986 | orchestrator | 2026-02-05 00:33:44.426994 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-05 00:33:44.427014 | orchestrator | Thursday 05 February 2026 00:33:19 +0000 (0:00:01.353) 0:07:56.492 ***** 2026-02-05 00:33:44.427022 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:44.427029 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:44.427036 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:44.427043 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:44.427050 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:44.427057 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:44.427064 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:44.427071 | orchestrator | 2026-02-05 00:33:44.427078 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-05 00:33:44.427085 | orchestrator | Thursday 05 February 2026 00:33:20 +0000 (0:00:01.430) 0:07:57.922 ***** 2026-02-05 00:33:44.427092 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:44.427099 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:44.427107 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:44.427114 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:44.427121 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:44.427127 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:44.427135 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:44.427142 | orchestrator | 2026-02-05 00:33:44.427149 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-05 00:33:44.427156 | orchestrator | Thursday 05 February 2026 00:33:21 +0000 (0:00:00.480) 0:07:58.403 ***** 2026-02-05 00:33:44.427163 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:44.427173 | orchestrator | 2026-02-05 00:33:44.427180 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-05 00:33:44.427187 | orchestrator | Thursday 05 February 2026 00:33:22 +0000 (0:00:00.933) 0:07:59.336 ***** 2026-02-05 00:33:44.427197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:44.427216 | orchestrator | 2026-02-05 00:33:44.427224 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-05 00:33:44.427231 | orchestrator | Thursday 05 February 2026 00:33:22 +0000 (0:00:00.767) 0:08:00.104 ***** 2026-02-05 00:33:44.427238 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427245 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427252 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.427259 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427266 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427273 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427280 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427287 | orchestrator | 2026-02-05 00:33:44.427312 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-05 00:33:44.427320 | orchestrator | Thursday 05 February 2026 00:33:33 +0000 (0:00:10.190) 0:08:10.294 ***** 2026-02-05 00:33:44.427327 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427334 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427341 | orchestrator | changed: [testbed-node-4] 2026-02-05 
00:33:44.427348 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427356 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427362 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427369 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427376 | orchestrator | 2026-02-05 00:33:44.427384 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-05 00:33:44.427391 | orchestrator | Thursday 05 February 2026 00:33:33 +0000 (0:00:00.858) 0:08:11.153 ***** 2026-02-05 00:33:44.427398 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427405 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427412 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.427419 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427426 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427433 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427440 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427447 | orchestrator | 2026-02-05 00:33:44.427454 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-05 00:33:44.427461 | orchestrator | Thursday 05 February 2026 00:33:35 +0000 (0:00:01.319) 0:08:12.472 ***** 2026-02-05 00:33:44.427469 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427476 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427483 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.427490 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427497 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427504 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427512 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427519 | orchestrator | 2026-02-05 00:33:44.427526 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-05 00:33:44.427533 | orchestrator | Thursday 05 February 2026 00:33:37 +0000 (0:00:01.920) 0:08:14.393 ***** 2026-02-05 00:33:44.427541 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427548 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.427555 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427562 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427569 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427576 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427583 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427590 | orchestrator | 2026-02-05 00:33:44.427597 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-05 00:33:44.427605 | orchestrator | Thursday 05 February 2026 00:33:38 +0000 (0:00:01.258) 0:08:15.652 ***** 2026-02-05 00:33:44.427612 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427619 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427626 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.427638 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427645 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427652 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427659 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427666 | orchestrator | 2026-02-05 00:33:44.427674 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-05 00:33:44.427681 | orchestrator | 2026-02-05 00:33:44.427692 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-05 00:33:44.427699 | orchestrator | Thursday 05 February 2026 00:33:39 +0000 (0:00:01.133) 0:08:16.786 ***** 2026-02-05 00:33:44.427707 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-05 00:33:44.427714 | orchestrator | 2026-02-05 00:33:44.427721 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-05 00:33:44.427728 | orchestrator | Thursday 05 February 2026 00:33:40 +0000 (0:00:00.780) 0:08:17.566 ***** 2026-02-05 00:33:44.427735 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:44.427760 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:44.427768 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:44.427775 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:44.427782 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:44.427790 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:44.427796 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:44.427803 | orchestrator | 2026-02-05 00:33:44.427811 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-05 00:33:44.427818 | orchestrator | Thursday 05 February 2026 00:33:41 +0000 (0:00:01.034) 0:08:18.601 ***** 2026-02-05 00:33:44.427825 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:44.427832 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:44.427840 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:44.427847 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:44.427854 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:44.427861 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:44.427868 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:44.427875 | orchestrator | 2026-02-05 00:33:44.427882 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-05 00:33:44.427893 | orchestrator | Thursday 05 February 2026 00:33:42 +0000 (0:00:01.159) 0:08:19.760 ***** 2026-02-05 00:33:44.427906 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-05 00:33:44.427925 | orchestrator | 2026-02-05 00:33:44.427940 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-05 00:33:44.427952 | orchestrator | Thursday 05 February 2026 00:33:43 +0000 (0:00:00.857) 0:08:20.617 ***** 2026-02-05 00:33:44.427964 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:44.427975 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:44.427986 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:44.427998 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:44.428010 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:44.428021 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:44.428033 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:44.428044 | orchestrator | 2026-02-05 00:33:44.428064 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-05 00:33:45.958938 | orchestrator | Thursday 05 February 2026 00:33:44 +0000 (0:00:01.059) 0:08:21.676 ***** 2026-02-05 00:33:45.959050 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:45.959066 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:45.959078 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:45.959096 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:45.959138 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:45.959158 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:45.959169 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:45.959180 | orchestrator | 2026-02-05 00:33:45.959222 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:33:45.959235 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-05 00:33:45.959247 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-05 00:33:45.959258 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-05 00:33:45.959269 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-05 00:33:45.959280 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-05 00:33:45.959290 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-05 00:33:45.959301 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-05 00:33:45.959320 | orchestrator | 2026-02-05 00:33:45.959333 | orchestrator | 2026-02-05 00:33:45.959344 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:33:45.959356 | orchestrator | Thursday 05 February 2026 00:33:45 +0000 (0:00:01.098) 0:08:22.775 ***** 2026-02-05 00:33:45.959366 | orchestrator | =============================================================================== 2026-02-05 00:33:45.959377 | orchestrator | osism.commons.packages : Install required packages --------------------- 85.19s 2026-02-05 00:33:45.959388 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.55s 2026-02-05 00:33:45.959399 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.14s 2026-02-05 00:33:45.959409 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.21s 2026-02-05 00:33:45.959420 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.67s 2026-02-05 00:33:45.959445 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.77s 2026-02-05 00:33:45.959456 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 
10.73s 2026-02-05 00:33:45.959467 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.36s 2026-02-05 00:33:45.959479 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.19s 2026-02-05 00:33:45.959492 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.86s 2026-02-05 00:33:45.959505 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.22s 2026-02-05 00:33:45.959517 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.77s 2026-02-05 00:33:45.959529 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.68s 2026-02-05 00:33:45.959542 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.46s 2026-02-05 00:33:45.959554 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.26s 2026-02-05 00:33:45.959566 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.28s 2026-02-05 00:33:45.959579 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.05s 2026-02-05 00:33:45.959592 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.88s 2026-02-05 00:33:45.959605 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.91s 2026-02-05 00:33:45.959617 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.71s 2026-02-05 00:33:46.241282 | orchestrator | + osism apply fail2ban 2026-02-05 00:33:58.251456 | orchestrator | 2026-02-05 00:33:58 | INFO  | Task 97e15151-aec8-4358-88a6-4a78aab9a83d (fail2ban) was prepared for execution. 
2026-02-05 00:33:58.251566 | orchestrator | 2026-02-05 00:33:58 | INFO  | It takes a moment until task 97e15151-aec8-4358-88a6-4a78aab9a83d (fail2ban) has been started and output is visible here. 2026-02-05 00:34:20.779334 | orchestrator | 2026-02-05 00:34:20.779452 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-05 00:34:20.779488 | orchestrator | 2026-02-05 00:34:20.779521 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-05 00:34:20.779552 | orchestrator | Thursday 05 February 2026 00:34:02 +0000 (0:00:00.242) 0:00:00.242 ***** 2026-02-05 00:34:20.779574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:34:20.779595 | orchestrator | 2026-02-05 00:34:20.779613 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-05 00:34:20.779685 | orchestrator | Thursday 05 February 2026 00:34:03 +0000 (0:00:01.010) 0:00:01.252 ***** 2026-02-05 00:34:20.779706 | orchestrator | changed: [testbed-manager] 2026-02-05 00:34:20.779726 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:34:20.779744 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:34:20.779822 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:34:20.779890 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:34:20.779914 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:34:20.779935 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:34:20.779953 | orchestrator | 2026-02-05 00:34:20.779974 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-05 00:34:20.779987 | orchestrator | Thursday 05 February 2026 00:34:15 +0000 (0:00:12.189) 0:00:13.442 ***** 
2026-02-05 00:34:20.780000 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:20.780013 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:20.780026 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:20.780038 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:20.780052 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:20.780064 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:20.780077 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:20.780089 | orchestrator |
2026-02-05 00:34:20.780101 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-05 00:34:20.780114 | orchestrator | Thursday 05 February 2026 00:34:17 +0000 (0:00:01.609) 0:00:15.052 *****
2026-02-05 00:34:20.780127 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:20.780139 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:20.780150 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:20.780160 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:20.780171 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:20.780182 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:20.780193 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:20.780204 | orchestrator |
2026-02-05 00:34:20.780214 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-05 00:34:20.780225 | orchestrator | Thursday 05 February 2026 00:34:18 +0000 (0:00:01.504) 0:00:16.556 *****
2026-02-05 00:34:20.780236 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:20.780247 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:20.780258 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:20.780269 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:20.780280 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:20.780290 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:20.780301 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:20.780312 | orchestrator |
2026-02-05 00:34:20.780323 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:34:20.780334 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780379 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780391 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780402 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780413 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780424 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780435 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:34:20.780445 | orchestrator |
2026-02-05 00:34:20.780456 | orchestrator |
2026-02-05 00:34:20.780467 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:34:20.780478 | orchestrator | Thursday 05 February 2026 00:34:20 +0000 (0:00:01.675) 0:00:18.232 *****
2026-02-05 00:34:20.780488 | orchestrator | ===============================================================================
2026-02-05 00:34:20.780499 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.19s
2026-02-05 00:34:20.780510 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.68s
2026-02-05 00:34:20.780520 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.61s
2026-02-05 00:34:20.780531 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.50s
2026-02-05 00:34:20.780542 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.01s
2026-02-05 00:34:21.057498 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-05 00:34:21.057609 | orchestrator | + osism apply network
2026-02-05 00:34:33.120433 | orchestrator | 2026-02-05 00:34:33 | INFO  | Task 6da2557e-5c7a-45c4-a607-b4ada0536442 (network) was prepared for execution.
2026-02-05 00:34:33.120559 | orchestrator | 2026-02-05 00:34:33 | INFO  | It takes a moment until task 6da2557e-5c7a-45c4-a607-b4ada0536442 (network) has been started and output is visible here.
2026-02-05 00:35:01.375859 | orchestrator |
2026-02-05 00:35:01.375977 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-05 00:35:01.375994 | orchestrator |
2026-02-05 00:35:01.376007 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-05 00:35:01.376019 | orchestrator | Thursday 05 February 2026 00:34:37 +0000 (0:00:00.270) 0:00:00.270 *****
2026-02-05 00:35:01.376030 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.376042 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.376053 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.376064 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.376075 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.376085 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.376096 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.376107 | orchestrator |
2026-02-05 00:35:01.376118 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-05 00:35:01.376129 | orchestrator | Thursday 05 February 2026 00:34:38 +0000 (0:00:00.697) 0:00:00.968 *****
2026-02-05 00:35:01.376142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:35:01.376155 | orchestrator |
2026-02-05 00:35:01.376166 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-05 00:35:01.376177 | orchestrator | Thursday 05 February 2026 00:34:39 +0000 (0:00:01.159) 0:00:02.128 *****
2026-02-05 00:35:01.376213 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.376225 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.376235 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.376246 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.376257 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.376267 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.376278 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.376288 | orchestrator |
2026-02-05 00:35:01.376299 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-05 00:35:01.376310 | orchestrator | Thursday 05 February 2026 00:34:41 +0000 (0:00:02.215) 0:00:04.343 *****
2026-02-05 00:35:01.376321 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.376332 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.376342 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.376353 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.376364 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.376377 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.376390 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.376403 | orchestrator |
2026-02-05 00:35:01.376415 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-05 00:35:01.376429 | orchestrator | Thursday 05 February 2026 00:34:43 +0000 (0:00:01.902) 0:00:06.245 *****
2026-02-05 00:35:01.376441 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-05 00:35:01.376455 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-05 00:35:01.376467 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-05 00:35:01.376480 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-05 00:35:01.376492 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-05 00:35:01.376504 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-05 00:35:01.376518 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-05 00:35:01.376531 | orchestrator |
2026-02-05 00:35:01.376544 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-05 00:35:01.376557 | orchestrator | Thursday 05 February 2026 00:34:44 +0000 (0:00:01.034) 0:00:07.280 *****
2026-02-05 00:35:01.376584 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 00:35:01.376597 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:35:01.376609 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 00:35:01.376622 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 00:35:01.376635 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:35:01.376648 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 00:35:01.376660 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 00:35:01.376670 | orchestrator |
2026-02-05 00:35:01.376681 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-05 00:35:01.376692 | orchestrator | Thursday 05 February 2026 00:34:47 +0000 (0:00:03.143) 0:00:10.423 *****
2026-02-05 00:35:01.376703 | orchestrator | changed: [testbed-manager]
2026-02-05 00:35:01.376714 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:35:01.376724 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:35:01.376735 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:35:01.376746 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:35:01.376816 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:35:01.376828 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:35:01.376839 | orchestrator |
2026-02-05 00:35:01.376850 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-05 00:35:01.376861 | orchestrator | Thursday 05 February 2026 00:34:49 +0000 (0:00:01.492) 0:00:11.916 *****
2026-02-05 00:35:01.376871 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:35:01.376882 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:35:01.376893 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 00:35:01.376903 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 00:35:01.376914 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 00:35:01.376934 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 00:35:01.376945 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 00:35:01.376955 | orchestrator |
2026-02-05 00:35:01.376966 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-05 00:35:01.376977 | orchestrator | Thursday 05 February 2026 00:34:50 +0000 (0:00:01.030) 0:00:13.427 *****
2026-02-05 00:35:01.376987 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.376998 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.377009 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.377020 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.377030 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.377041 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.377052 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.377062 | orchestrator |
2026-02-05 00:35:01.377073 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-05 00:35:01.377102 | orchestrator | Thursday 05 February 2026 00:34:51 +0000 (0:00:01.030) 0:00:14.458 *****
2026-02-05 00:35:01.377113 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:35:01.377124 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:35:01.377135 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:35:01.377145 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:35:01.377156 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:35:01.377166 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:35:01.377177 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:35:01.377188 | orchestrator |
2026-02-05 00:35:01.377199 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-05 00:35:01.377209 | orchestrator | Thursday 05 February 2026 00:34:52 +0000 (0:00:00.558) 0:00:15.017 *****
2026-02-05 00:35:01.377220 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.377231 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.377241 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.377252 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.377263 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.377274 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.377284 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.377295 | orchestrator |
2026-02-05 00:35:01.377306 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-05 00:35:01.377317 | orchestrator | Thursday 05 February 2026 00:34:54 +0000 (0:00:02.407) 0:00:17.425 *****
2026-02-05 00:35:01.377327 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:35:01.377338 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:35:01.377349 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:35:01.377359 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:35:01.377370 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:35:01.377380 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:35:01.377392 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-05 00:35:01.377404 | orchestrator |
2026-02-05 00:35:01.377415 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-05 00:35:01.377426 | orchestrator | Thursday 05 February 2026 00:34:55 +0000 (0:00:00.885) 0:00:18.310 *****
2026-02-05 00:35:01.377437 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.377447 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:35:01.377458 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:35:01.377469 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:35:01.377479 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:35:01.377490 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:35:01.377501 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:35:01.377511 | orchestrator |
2026-02-05 00:35:01.377531 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-05 00:35:01.377551 | orchestrator | Thursday 05 February 2026 00:34:57 +0000 (0:00:01.743) 0:00:20.054 *****
2026-02-05 00:35:01.377578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:35:01.377614 | orchestrator |
2026-02-05 00:35:01.377632 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-05 00:35:01.377651 | orchestrator | Thursday 05 February 2026 00:34:58 +0000 (0:00:01.233) 0:00:21.287 *****
2026-02-05 00:35:01.377668 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.377686 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.377705 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.377725 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.377744 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.377804 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.377818 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.377829 | orchestrator |
2026-02-05 00:35:01.377839 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-05 00:35:01.377850 | orchestrator | Thursday 05 February 2026 00:34:59 +0000 (0:00:00.979) 0:00:22.267 *****
2026-02-05 00:35:01.377861 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:01.377872 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:01.377883 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:01.377893 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:01.377904 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:01.377915 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:01.377925 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:01.377936 | orchestrator |
2026-02-05 00:35:01.377947 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-05 00:35:01.377957 | orchestrator | Thursday 05 February 2026 00:35:00 +0000 (0:00:00.769) 0:00:23.036 *****
2026-02-05 00:35:01.377968 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.377980 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.377990 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.378001 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.378012 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378163 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.378175 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.378186 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378197 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378207 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 00:35:01.378218 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378229 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378240 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378250 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 00:35:01.378261 | orchestrator |
2026-02-05 00:35:01.378285 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-05 00:35:16.638371 | orchestrator | Thursday 05 February 2026 00:35:01 +0000 (0:00:01.168) 0:00:24.205 *****
2026-02-05 00:35:16.638487 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:35:16.638500 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:35:16.638508 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:35:16.638514 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:35:16.638521 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:35:16.638528 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:35:16.638535 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:35:16.638541 | orchestrator |
2026-02-05 00:35:16.638549 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-05 00:35:16.638577 | orchestrator | Thursday 05 February 2026 00:35:01 +0000 (0:00:00.600) 0:00:24.805 *****
2026-02-05 00:35:16.638586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-5, testbed-node-3, testbed-node-4
2026-02-05 00:35:16.638594 | orchestrator |
2026-02-05 00:35:16.638601 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-05 00:35:16.638607 | orchestrator | Thursday 05 February 2026 00:35:06 +0000 (0:00:04.353) 0:00:29.159 *****
2026-02-05 00:35:16.638615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638636 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638655 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638809 | orchestrator |
2026-02-05 00:35:16.638816 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-05 00:35:16.638823 | orchestrator | Thursday 05 February 2026 00:35:11 +0000 (0:00:05.090) 0:00:34.249 *****
2026-02-05 00:35:16.638830 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638866 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-05 00:35:16.638886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:16.638925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:22.254114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-05 00:35:22.254215 | orchestrator |
2026-02-05 00:35:22.254228 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-05 00:35:22.254239 | orchestrator | Thursday 05 February 2026 00:35:16 +0000 (0:00:05.212) 0:00:39.462 *****
2026-02-05 00:35:22.254250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:35:22.254260 | orchestrator |
2026-02-05 00:35:22.254269 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-05 00:35:22.254278 | orchestrator | Thursday 05 February 2026 00:35:17 +0000 (0:00:01.051) 0:00:40.513 *****
2026-02-05 00:35:22.254287 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:22.254296 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:22.254305 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:22.254313 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:22.254322 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:22.254331 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:22.254339 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:22.254348 | orchestrator |
2026-02-05 00:35:22.254356 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-05 00:35:22.254365 | orchestrator | Thursday 05 February 2026 00:35:18 +0000 (0:00:01.019) 0:00:41.533 *****
2026-02-05 00:35:22.254387 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 00:35:22.254407 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 00:35:22.254423 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 00:35:22.254437 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 00:35:22.254451 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:35:22.254466 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 00:35:22.254481 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 00:35:22.254496 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 00:35:22.254510 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 00:35:22.254524 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:35:22.254534 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 00:35:22.254557 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 00:35:22.254567 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 00:35:22.254576 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 00:35:22.254584 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:35:22.254615 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 00:35:22.254626 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 00:35:22.254636 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 00:35:22.254646 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 00:35:22.254656 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:35:22.254666 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 00:35:22.254676 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 00:35:22.254687 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 00:35:22.254695 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 00:35:22.254704 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:35:22.254713 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 00:35:22.254721 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 00:35:22.254729 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 00:35:22.254738 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:35:22.254774 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:35:22.254785 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:35:22.254793 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:35:22.254802 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:35:22.254810 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:35:22.254819 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:35:22.254827 | orchestrator | 2026-02-05 00:35:22.254836 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-02-05 00:35:22.254860 | orchestrator | Thursday 05 February 2026 00:35:20 +0000 (0:00:01.883) 0:00:43.416 ***** 2026-02-05 00:35:22.254869 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:35:22.254878 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:35:22.254887 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:35:22.254895 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:35:22.254904 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:35:22.254912 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:35:22.254920 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:35:22.254929 | orchestrator | 2026-02-05 00:35:22.254938 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-02-05 00:35:22.254946 | orchestrator | Thursday 05 February 2026 00:35:21 +0000 (0:00:00.610) 0:00:44.027 ***** 2026-02-05 00:35:22.254955 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:35:22.254963 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:35:22.254972 | orchestrator 
| skipping: [testbed-node-1] 2026-02-05 00:35:22.254980 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:35:22.254990 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:35:22.254998 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:35:22.255007 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:35:22.255015 | orchestrator | 2026-02-05 00:35:22.255024 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:35:22.255034 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 00:35:22.255044 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 00:35:22.255060 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 00:35:22.255069 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 00:35:22.255077 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 00:35:22.255086 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 00:35:22.255094 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 00:35:22.255103 | orchestrator | 2026-02-05 00:35:22.255112 | orchestrator | 2026-02-05 00:35:22.255120 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:35:22.255129 | orchestrator | Thursday 05 February 2026 00:35:21 +0000 (0:00:00.667) 0:00:44.694 ***** 2026-02-05 00:35:22.255138 | orchestrator | =============================================================================== 2026-02-05 00:35:22.255146 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.21s 
2026-02-05 00:35:22.255155 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.09s 2026-02-05 00:35:22.255164 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.35s 2026-02-05 00:35:22.255172 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.14s 2026-02-05 00:35:22.255181 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.41s 2026-02-05 00:35:22.255189 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.22s 2026-02-05 00:35:22.255198 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.90s 2026-02-05 00:35:22.255207 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.88s 2026-02-05 00:35:22.255215 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.74s 2026-02-05 00:35:22.255224 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.51s 2026-02-05 00:35:22.255232 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s 2026-02-05 00:35:22.255241 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s 2026-02-05 00:35:22.255249 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 2026-02-05 00:35:22.255258 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s 2026-02-05 00:35:22.255266 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.05s 2026-02-05 00:35:22.255275 | orchestrator | osism.commons.network : Create required directories --------------------- 1.03s 2026-02-05 00:35:22.255283 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.03s 2026-02-05 
00:35:22.255292 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s 2026-02-05 00:35:22.255300 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2026-02-05 00:35:22.255309 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s 2026-02-05 00:35:22.550990 | orchestrator | + osism apply wireguard 2026-02-05 00:35:34.515162 | orchestrator | 2026-02-05 00:35:34 | INFO  | Task ffe5a838-dbd7-49dc-81e0-8eddace93047 (wireguard) was prepared for execution. 2026-02-05 00:35:34.515276 | orchestrator | 2026-02-05 00:35:34 | INFO  | It takes a moment until task ffe5a838-dbd7-49dc-81e0-8eddace93047 (wireguard) has been started and output is visible here. 2026-02-05 00:35:53.653324 | orchestrator | 2026-02-05 00:35:53.653445 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-05 00:35:53.653489 | orchestrator | 2026-02-05 00:35:53.653502 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-05 00:35:53.653513 | orchestrator | Thursday 05 February 2026 00:35:38 +0000 (0:00:00.230) 0:00:00.230 ***** 2026-02-05 00:35:53.653524 | orchestrator | ok: [testbed-manager] 2026-02-05 00:35:53.653536 | orchestrator | 2026-02-05 00:35:53.653547 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-05 00:35:53.653558 | orchestrator | Thursday 05 February 2026 00:35:39 +0000 (0:00:01.410) 0:00:01.640 ***** 2026-02-05 00:35:53.653569 | orchestrator | changed: [testbed-manager] 2026-02-05 00:35:53.653581 | orchestrator | 2026-02-05 00:35:53.653596 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-05 00:35:53.653607 | orchestrator | Thursday 05 February 2026 00:35:46 +0000 (0:00:06.174) 0:00:07.814 ***** 2026-02-05 00:35:53.653618 | orchestrator | changed: 
[testbed-manager] 2026-02-05 00:35:53.653628 | orchestrator | 2026-02-05 00:35:53.653639 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-05 00:35:53.653650 | orchestrator | Thursday 05 February 2026 00:35:46 +0000 (0:00:00.549) 0:00:08.364 ***** 2026-02-05 00:35:53.653660 | orchestrator | changed: [testbed-manager] 2026-02-05 00:35:53.653671 | orchestrator | 2026-02-05 00:35:53.653681 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-05 00:35:53.653692 | orchestrator | Thursday 05 February 2026 00:35:47 +0000 (0:00:00.420) 0:00:08.785 ***** 2026-02-05 00:35:53.653703 | orchestrator | ok: [testbed-manager] 2026-02-05 00:35:53.653714 | orchestrator | 2026-02-05 00:35:53.653724 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-05 00:35:53.653763 | orchestrator | Thursday 05 February 2026 00:35:47 +0000 (0:00:00.666) 0:00:09.451 ***** 2026-02-05 00:35:53.653775 | orchestrator | ok: [testbed-manager] 2026-02-05 00:35:53.653788 | orchestrator | 2026-02-05 00:35:53.653801 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-05 00:35:53.653813 | orchestrator | Thursday 05 February 2026 00:35:48 +0000 (0:00:00.399) 0:00:09.850 ***** 2026-02-05 00:35:53.653826 | orchestrator | ok: [testbed-manager] 2026-02-05 00:35:53.653838 | orchestrator | 2026-02-05 00:35:53.653851 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-05 00:35:53.653863 | orchestrator | Thursday 05 February 2026 00:35:48 +0000 (0:00:00.393) 0:00:10.243 ***** 2026-02-05 00:35:53.653893 | orchestrator | changed: [testbed-manager] 2026-02-05 00:35:53.653906 | orchestrator | 2026-02-05 00:35:53.653918 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-05 00:35:53.653931 | orchestrator 
| Thursday 05 February 2026 00:35:49 +0000 (0:00:01.173) 0:00:11.417 ***** 2026-02-05 00:35:53.653943 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:35:53.653956 | orchestrator | changed: [testbed-manager] 2026-02-05 00:35:53.653969 | orchestrator | 2026-02-05 00:35:53.653981 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-05 00:35:53.653999 | orchestrator | Thursday 05 February 2026 00:35:50 +0000 (0:00:00.956) 0:00:12.373 ***** 2026-02-05 00:35:53.654089 | orchestrator | changed: [testbed-manager] 2026-02-05 00:35:53.654110 | orchestrator | 2026-02-05 00:35:53.654140 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-05 00:35:53.654159 | orchestrator | Thursday 05 February 2026 00:35:52 +0000 (0:00:01.699) 0:00:14.073 ***** 2026-02-05 00:35:53.654178 | orchestrator | changed: [testbed-manager] 2026-02-05 00:35:53.654195 | orchestrator | 2026-02-05 00:35:53.654213 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:35:53.654232 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:35:53.654253 | orchestrator | 2026-02-05 00:35:53.654271 | orchestrator | 2026-02-05 00:35:53.654289 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:35:53.654324 | orchestrator | Thursday 05 February 2026 00:35:53 +0000 (0:00:00.919) 0:00:14.992 ***** 2026-02-05 00:35:53.654345 | orchestrator | =============================================================================== 2026-02-05 00:35:53.654366 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.17s 2026-02-05 00:35:53.654384 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.70s 2026-02-05 00:35:53.654400 | orchestrator | 
osism.services.wireguard : Install iptables package --------------------- 1.41s 2026-02-05 00:35:53.654411 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-02-05 00:35:53.654422 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2026-02-05 00:35:53.654432 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2026-02-05 00:35:53.654443 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.67s 2026-02-05 00:35:53.654453 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-02-05 00:35:53.654463 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2026-02-05 00:35:53.654474 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s 2026-02-05 00:35:53.654484 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2026-02-05 00:35:53.947193 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-05 00:35:53.979440 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-05 00:35:53.979537 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-05 00:35:54.055251 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 199 0 --:--:-- --:--:-- --:--:-- 202 2026-02-05 00:35:54.066436 | orchestrator | + osism apply --environment custom workarounds 2026-02-05 00:35:55.771257 | orchestrator | 2026-02-05 00:35:55 | INFO  | Trying to run play workarounds in environment custom 2026-02-05 00:36:05.897492 | orchestrator | 2026-02-05 00:36:05 | INFO  | Task 0006b7c2-be89-4552-949d-80952fe5932f (workarounds) was prepared for execution. 
2026-02-05 00:36:05.897616 | orchestrator | 2026-02-05 00:36:05 | INFO  | It takes a moment until task 0006b7c2-be89-4552-949d-80952fe5932f (workarounds) has been started and output is visible here. 2026-02-05 00:36:29.192701 | orchestrator | 2026-02-05 00:36:29.192825 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:36:29.192838 | orchestrator | 2026-02-05 00:36:29.192846 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-05 00:36:29.192854 | orchestrator | Thursday 05 February 2026 00:36:09 +0000 (0:00:00.094) 0:00:00.094 ***** 2026-02-05 00:36:29.192862 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192870 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192877 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192884 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192892 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192899 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192906 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-05 00:36:29.192913 | orchestrator | 2026-02-05 00:36:29.192920 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-05 00:36:29.192927 | orchestrator | 2026-02-05 00:36:29.192934 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-05 00:36:29.192942 | orchestrator | Thursday 05 February 2026 00:36:10 +0000 (0:00:00.671) 0:00:00.766 ***** 2026-02-05 00:36:29.192949 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:29.192958 | orchestrator | 2026-02-05 00:36:29.192986 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-05 00:36:29.192994 | orchestrator | 2026-02-05 00:36:29.193001 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-05 00:36:29.193009 | orchestrator | Thursday 05 February 2026 00:36:12 +0000 (0:00:01.998) 0:00:02.765 ***** 2026-02-05 00:36:29.193016 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:29.193023 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:29.193030 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:29.193037 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:29.193044 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:29.193051 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:29.193058 | orchestrator | 2026-02-05 00:36:29.193065 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-05 00:36:29.193072 | orchestrator | 2026-02-05 00:36:29.193079 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-05 00:36:29.193098 | orchestrator | Thursday 05 February 2026 00:36:14 +0000 (0:00:01.796) 0:00:04.561 ***** 2026-02-05 00:36:29.193106 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-05 00:36:29.193115 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-05 00:36:29.193122 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-05 00:36:29.193129 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-05 00:36:29.193136 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-05 00:36:29.193143 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-05 00:36:29.193150 | orchestrator | 2026-02-05 00:36:29.193157 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-02-05 00:36:29.193164 | orchestrator | Thursday 05 February 2026 00:36:15 +0000 (0:00:01.425) 0:00:05.986 ***** 2026-02-05 00:36:29.193172 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:36:29.193179 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:36:29.193186 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:36:29.193193 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:36:29.193200 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:36:29.193207 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:36:29.193214 | orchestrator | 2026-02-05 00:36:29.193221 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-05 00:36:29.193228 | orchestrator | Thursday 05 February 2026 00:36:19 +0000 (0:00:03.436) 0:00:09.423 ***** 2026-02-05 00:36:29.193236 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:29.193243 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:29.193250 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:29.193257 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:29.193266 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:29.193274 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:29.193282 | orchestrator | 2026-02-05 00:36:29.193291 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-05 00:36:29.193299 | orchestrator | 2026-02-05 00:36:29.193307 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-05 00:36:29.193316 | orchestrator | Thursday 05 February 2026 00:36:19 +0000 (0:00:00.554) 0:00:09.978 ***** 2026-02-05 
00:36:29.193325 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:36:29.193333 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:36:29.193341 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:36:29.193350 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:36:29.193358 | orchestrator | changed: [testbed-manager] 2026-02-05 00:36:29.193366 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:36:29.193375 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:36:29.193391 | orchestrator | 2026-02-05 00:36:29.193399 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-05 00:36:29.193408 | orchestrator | Thursday 05 February 2026 00:36:21 +0000 (0:00:01.523) 0:00:11.502 ***** 2026-02-05 00:36:29.193416 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:36:29.193424 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:36:29.193433 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:36:29.193441 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:36:29.193449 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:36:29.193457 | orchestrator | changed: [testbed-manager] 2026-02-05 00:36:29.193477 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:36:29.193486 | orchestrator | 2026-02-05 00:36:29.193495 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-05 00:36:29.193504 | orchestrator | Thursday 05 February 2026 00:36:22 +0000 (0:00:01.391) 0:00:12.893 ***** 2026-02-05 00:36:29.193513 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:29.193522 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:29.193530 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:29.193537 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:29.193544 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:29.193551 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:29.193558 | orchestrator | ok: [testbed-manager] 
2026-02-05 00:36:29.193565 | orchestrator | 2026-02-05 00:36:29.193573 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-05 00:36:29.193580 | orchestrator | Thursday 05 February 2026 00:36:23 +0000 (0:00:01.426) 0:00:14.320 ***** 2026-02-05 00:36:29.193587 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:36:29.193594 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:36:29.193601 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:36:29.193608 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:36:29.193616 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:36:29.193623 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:36:29.193630 | orchestrator | changed: [testbed-manager] 2026-02-05 00:36:29.193637 | orchestrator | 2026-02-05 00:36:29.193644 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-05 00:36:29.193651 | orchestrator | Thursday 05 February 2026 00:36:25 +0000 (0:00:01.747) 0:00:16.067 ***** 2026-02-05 00:36:29.193658 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:29.193665 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:29.193673 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:29.193680 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:29.193687 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:29.193694 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:29.193701 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:36:29.193725 | orchestrator | 2026-02-05 00:36:29.193733 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-05 00:36:29.193740 | orchestrator | 2026-02-05 00:36:29.193747 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-05 00:36:29.193755 | orchestrator | Thursday 05 February 2026 00:36:26 +0000 (0:00:00.584) 
0:00:16.652 ***** 2026-02-05 00:36:29.193762 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:29.193769 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:29.193776 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:29.193784 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:29.193791 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:29.193798 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:29.193809 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:29.193816 | orchestrator | 2026-02-05 00:36:29.193823 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:36:29.193832 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:36:29.193840 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:29.193853 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:29.193860 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:29.193867 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:29.193875 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:29.193882 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:29.193889 | orchestrator | 2026-02-05 00:36:29.193896 | orchestrator | 2026-02-05 00:36:29.193904 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:36:29.193911 | orchestrator | Thursday 05 February 2026 00:36:29 +0000 (0:00:02.853) 0:00:19.506 ***** 2026-02-05 00:36:29.193918 | orchestrator | 
=============================================================================== 2026-02-05 00:36:29.193925 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.44s 2026-02-05 00:36:29.193932 | orchestrator | Install python3-docker -------------------------------------------------- 2.85s 2026-02-05 00:36:29.193939 | orchestrator | Apply netplan configuration --------------------------------------------- 2.00s 2026-02-05 00:36:29.193947 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2026-02-05 00:36:29.193954 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.75s 2026-02-05 00:36:29.193961 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.52s 2026-02-05 00:36:29.193968 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.43s 2026-02-05 00:36:29.193975 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.43s 2026-02-05 00:36:29.193982 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.39s 2026-02-05 00:36:29.193989 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.67s 2026-02-05 00:36:29.193997 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.58s 2026-02-05 00:36:29.194008 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.55s 2026-02-05 00:36:29.575288 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-05 00:36:41.596504 | orchestrator | 2026-02-05 00:36:41 | INFO  | Task d29c309c-aeea-43a3-8294-e036d1b56ac7 (reboot) was prepared for execution. 
2026-02-05 00:36:41.596605 | orchestrator | 2026-02-05 00:36:41 | INFO  | It takes a moment until task d29c309c-aeea-43a3-8294-e036d1b56ac7 (reboot) has been started and output is visible here. 2026-02-05 00:36:50.934507 | orchestrator | 2026-02-05 00:36:50.934617 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:36:50.934634 | orchestrator | 2026-02-05 00:36:50.934646 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:36:50.934658 | orchestrator | Thursday 05 February 2026 00:36:45 +0000 (0:00:00.181) 0:00:00.181 ***** 2026-02-05 00:36:50.934670 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:50.934721 | orchestrator | 2026-02-05 00:36:50.934733 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:36:50.934744 | orchestrator | Thursday 05 February 2026 00:36:45 +0000 (0:00:00.089) 0:00:00.270 ***** 2026-02-05 00:36:50.934755 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:36:50.934766 | orchestrator | 2026-02-05 00:36:50.934777 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:36:50.934814 | orchestrator | Thursday 05 February 2026 00:36:46 +0000 (0:00:00.881) 0:00:01.151 ***** 2026-02-05 00:36:50.934825 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:50.934836 | orchestrator | 2026-02-05 00:36:50.934850 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:36:50.934868 | orchestrator | 2026-02-05 00:36:50.934887 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:36:50.934904 | orchestrator | Thursday 05 February 2026 00:36:46 +0000 (0:00:00.098) 0:00:01.250 ***** 2026-02-05 00:36:50.934921 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:50.934940 | 
orchestrator | 2026-02-05 00:36:50.934958 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:36:50.934974 | orchestrator | Thursday 05 February 2026 00:36:46 +0000 (0:00:00.102) 0:00:01.352 ***** 2026-02-05 00:36:50.934985 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:36:50.934996 | orchestrator | 2026-02-05 00:36:50.935009 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:36:50.935037 | orchestrator | Thursday 05 February 2026 00:36:47 +0000 (0:00:00.665) 0:00:02.017 ***** 2026-02-05 00:36:50.935050 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:50.935062 | orchestrator | 2026-02-05 00:36:50.935075 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:36:50.935088 | orchestrator | 2026-02-05 00:36:50.935100 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:36:50.935113 | orchestrator | Thursday 05 February 2026 00:36:47 +0000 (0:00:00.099) 0:00:02.117 ***** 2026-02-05 00:36:50.935125 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:50.935138 | orchestrator | 2026-02-05 00:36:50.935151 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:36:50.935163 | orchestrator | Thursday 05 February 2026 00:36:47 +0000 (0:00:00.169) 0:00:02.286 ***** 2026-02-05 00:36:50.935175 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:36:50.935188 | orchestrator | 2026-02-05 00:36:50.935201 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:36:50.935214 | orchestrator | Thursday 05 February 2026 00:36:48 +0000 (0:00:00.641) 0:00:02.927 ***** 2026-02-05 00:36:50.935227 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:50.935238 | orchestrator | 2026-02-05 00:36:50.935251 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:36:50.935264 | orchestrator | 2026-02-05 00:36:50.935276 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:36:50.935286 | orchestrator | Thursday 05 February 2026 00:36:48 +0000 (0:00:00.109) 0:00:03.037 ***** 2026-02-05 00:36:50.935297 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:50.935308 | orchestrator | 2026-02-05 00:36:50.935319 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:36:50.935329 | orchestrator | Thursday 05 February 2026 00:36:48 +0000 (0:00:00.082) 0:00:03.119 ***** 2026-02-05 00:36:50.935340 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:36:50.935351 | orchestrator | 2026-02-05 00:36:50.935362 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:36:50.935372 | orchestrator | Thursday 05 February 2026 00:36:48 +0000 (0:00:00.633) 0:00:03.753 ***** 2026-02-05 00:36:50.935383 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:50.935393 | orchestrator | 2026-02-05 00:36:50.935404 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:36:50.935414 | orchestrator | 2026-02-05 00:36:50.935425 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:36:50.935436 | orchestrator | Thursday 05 February 2026 00:36:49 +0000 (0:00:00.102) 0:00:03.855 ***** 2026-02-05 00:36:50.935446 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:50.935457 | orchestrator | 2026-02-05 00:36:50.935468 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:36:50.935478 | orchestrator | Thursday 05 February 2026 00:36:49 +0000 (0:00:00.082) 0:00:03.938 ***** 2026-02-05 
00:36:50.935497 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:36:50.935508 | orchestrator | 2026-02-05 00:36:50.935519 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:36:50.935529 | orchestrator | Thursday 05 February 2026 00:36:49 +0000 (0:00:00.671) 0:00:04.610 ***** 2026-02-05 00:36:50.935540 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:50.935551 | orchestrator | 2026-02-05 00:36:50.935562 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:36:50.935573 | orchestrator | 2026-02-05 00:36:50.935584 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:36:50.935595 | orchestrator | Thursday 05 February 2026 00:36:49 +0000 (0:00:00.107) 0:00:04.717 ***** 2026-02-05 00:36:50.935605 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:50.935616 | orchestrator | 2026-02-05 00:36:50.935626 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:36:50.935637 | orchestrator | Thursday 05 February 2026 00:36:50 +0000 (0:00:00.087) 0:00:04.805 ***** 2026-02-05 00:36:50.935648 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:36:50.935658 | orchestrator | 2026-02-05 00:36:50.935669 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:36:50.935701 | orchestrator | Thursday 05 February 2026 00:36:50 +0000 (0:00:00.674) 0:00:05.479 ***** 2026-02-05 00:36:50.935731 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:50.935742 | orchestrator | 2026-02-05 00:36:50.935753 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:36:50.935764 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:50.935776 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:50.935787 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:50.935798 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:50.935808 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:50.935819 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:36:50.935830 | orchestrator | 2026-02-05 00:36:50.935841 | orchestrator | 2026-02-05 00:36:50.935851 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:36:50.935862 | orchestrator | Thursday 05 February 2026 00:36:50 +0000 (0:00:00.029) 0:00:05.509 ***** 2026-02-05 00:36:50.935879 | orchestrator | =============================================================================== 2026-02-05 00:36:50.935890 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.17s 2026-02-05 00:36:50.935901 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2026-02-05 00:36:50.935911 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-02-05 00:36:51.113869 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-05 00:37:02.903384 | orchestrator | 2026-02-05 00:37:02 | INFO  | Task 9e532739-1016-413b-b304-57771c3d5bc6 (wait-for-connection) was prepared for execution. 2026-02-05 00:37:02.903530 | orchestrator | 2026-02-05 00:37:02 | INFO  | It takes a moment until task 9e532739-1016-413b-b304-57771c3d5bc6 (wait-for-connection) has been started and output is visible here. 
2026-02-05 00:37:18.888995 | orchestrator | 2026-02-05 00:37:18.890615 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-05 00:37:18.890749 | orchestrator | 2026-02-05 00:37:18.890773 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-05 00:37:18.890787 | orchestrator | Thursday 05 February 2026 00:37:07 +0000 (0:00:00.232) 0:00:00.232 ***** 2026-02-05 00:37:18.890798 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:37:18.890850 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:37:18.890869 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:37:18.890890 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:37:18.890922 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:37:18.890939 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:37:18.890955 | orchestrator | 2026-02-05 00:37:18.890973 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:37:18.890991 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:18.891010 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:18.891028 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:18.891045 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:18.891064 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:18.891082 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:18.891100 | orchestrator | 2026-02-05 00:37:18.891120 | orchestrator | 2026-02-05 00:37:18.891139 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-05 00:37:18.891157 | orchestrator | Thursday 05 February 2026 00:37:18 +0000 (0:00:11.625) 0:00:11.858 ***** 2026-02-05 00:37:18.891174 | orchestrator | =============================================================================== 2026-02-05 00:37:18.891192 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2026-02-05 00:37:19.086804 | orchestrator | + osism apply hddtemp 2026-02-05 00:37:30.875190 | orchestrator | 2026-02-05 00:37:30 | INFO  | Task b712cf16-88ef-4c4b-b9e1-1b959bd9fea1 (hddtemp) was prepared for execution. 2026-02-05 00:37:30.875290 | orchestrator | 2026-02-05 00:37:30 | INFO  | It takes a moment until task b712cf16-88ef-4c4b-b9e1-1b959bd9fea1 (hddtemp) has been started and output is visible here. 2026-02-05 00:37:58.342797 | orchestrator | 2026-02-05 00:37:58.342895 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-05 00:37:58.342910 | orchestrator | 2026-02-05 00:37:58.342921 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-05 00:37:58.342931 | orchestrator | Thursday 05 February 2026 00:37:34 +0000 (0:00:00.186) 0:00:00.186 ***** 2026-02-05 00:37:58.342940 | orchestrator | ok: [testbed-manager] 2026-02-05 00:37:58.342951 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:37:58.342961 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:37:58.342970 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:37:58.342980 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:37:58.342989 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:37:58.342998 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:37:58.343008 | orchestrator | 2026-02-05 00:37:58.343018 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-05 00:37:58.343027 | orchestrator | Thursday 05 February 2026 
00:37:35 +0000 (0:00:00.520) 0:00:00.707 ***** 2026-02-05 00:37:58.343038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:37:58.343071 | orchestrator | 2026-02-05 00:37:58.343081 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-05 00:37:58.343091 | orchestrator | Thursday 05 February 2026 00:37:36 +0000 (0:00:01.012) 0:00:01.720 ***** 2026-02-05 00:37:58.343100 | orchestrator | ok: [testbed-manager] 2026-02-05 00:37:58.343110 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:37:58.343119 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:37:58.343129 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:37:58.343138 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:37:58.343148 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:37:58.343158 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:37:58.343167 | orchestrator | 2026-02-05 00:37:58.343177 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-05 00:37:58.343199 | orchestrator | Thursday 05 February 2026 00:37:38 +0000 (0:00:02.067) 0:00:03.787 ***** 2026-02-05 00:37:58.343209 | orchestrator | changed: [testbed-manager] 2026-02-05 00:37:58.343219 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:37:58.343229 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:37:58.343238 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:37:58.343247 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:37:58.343257 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:37:58.343266 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:37:58.343276 | orchestrator | 2026-02-05 00:37:58.343285 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-05 00:37:58.343295 | orchestrator | Thursday 05 February 2026 00:37:39 +0000 (0:00:01.057) 0:00:04.845 ***** 2026-02-05 00:37:58.343304 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:37:58.343314 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:37:58.343323 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:37:58.343333 | orchestrator | ok: [testbed-manager] 2026-02-05 00:37:58.343342 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:37:58.343352 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:37:58.343363 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:37:58.343374 | orchestrator | 2026-02-05 00:37:58.343385 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-05 00:37:58.343396 | orchestrator | Thursday 05 February 2026 00:37:40 +0000 (0:00:01.025) 0:00:05.870 ***** 2026-02-05 00:37:58.343407 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:37:58.343418 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:37:58.343429 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:37:58.343440 | orchestrator | changed: [testbed-manager] 2026-02-05 00:37:58.343451 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:37:58.343462 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:37:58.343488 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:37:58.343500 | orchestrator | 2026-02-05 00:37:58.343511 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-05 00:37:58.343523 | orchestrator | Thursday 05 February 2026 00:37:41 +0000 (0:00:00.787) 0:00:06.658 ***** 2026-02-05 00:37:58.343533 | orchestrator | changed: [testbed-manager] 2026-02-05 00:37:58.343542 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:37:58.343552 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:37:58.343561 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:37:58.343570 | orchestrator | changed: 
[testbed-node-3] 2026-02-05 00:37:58.343579 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:37:58.343589 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:37:58.343598 | orchestrator | 2026-02-05 00:37:58.343608 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-05 00:37:58.343642 | orchestrator | Thursday 05 February 2026 00:37:54 +0000 (0:00:13.134) 0:00:19.793 ***** 2026-02-05 00:37:58.343653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:37:58.343663 | orchestrator | 2026-02-05 00:37:58.343679 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-05 00:37:58.343689 | orchestrator | Thursday 05 February 2026 00:37:55 +0000 (0:00:01.160) 0:00:20.953 ***** 2026-02-05 00:37:58.343699 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:37:58.343708 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:37:58.343718 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:37:58.343727 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:37:58.343737 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:37:58.343746 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:37:58.343755 | orchestrator | changed: [testbed-manager] 2026-02-05 00:37:58.343765 | orchestrator | 2026-02-05 00:37:58.343780 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:37:58.343796 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:37:58.343833 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:37:58.343850 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:37:58.343865 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:37:58.343882 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:37:58.343897 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:37:58.343914 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:37:58.343931 | orchestrator | 2026-02-05 00:37:58.343947 | orchestrator | 2026-02-05 00:37:58.343964 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:37:58.343980 | orchestrator | Thursday 05 February 2026 00:37:58 +0000 (0:00:02.397) 0:00:23.351 ***** 2026-02-05 00:37:58.343994 | orchestrator | =============================================================================== 2026-02-05 00:37:58.344011 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.13s 2026-02-05 00:37:58.344027 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.40s 2026-02-05 00:37:58.344044 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.07s 2026-02-05 00:37:58.344068 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s 2026-02-05 00:37:58.344083 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.06s 2026-02-05 00:37:58.344100 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.03s 2026-02-05 00:37:58.344116 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s 2026-02-05 00:37:58.344132 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.79s 2026-02-05 00:37:58.344149 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.52s 2026-02-05 00:37:58.642926 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-05 00:37:58.703600 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-05 00:37:58.703812 | orchestrator | + sudo systemctl restart manager.service 2026-02-05 00:38:12.384887 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 00:38:12.384977 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-05 00:38:12.384987 | orchestrator | + local max_attempts=60 2026-02-05 00:38:12.384995 | orchestrator | + local name=ceph-ansible 2026-02-05 00:38:12.385002 | orchestrator | + local attempt_num=1 2026-02-05 00:38:12.385009 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:12.417709 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:12.418671 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:12.418706 | orchestrator | + sleep 5 2026-02-05 00:38:17.422266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:17.556561 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:17.556669 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:17.556679 | orchestrator | + sleep 5 2026-02-05 00:38:22.559535 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:22.595499 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:22.595655 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:22.595675 | orchestrator | + sleep 5 2026-02-05 00:38:27.600918 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:27.833202 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:27.833303 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-05 00:38:27.833318 | orchestrator | + sleep 5 2026-02-05 00:38:32.845087 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:32.881358 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:32.881469 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:32.881487 | orchestrator | + sleep 5 2026-02-05 00:38:37.885564 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:37.922820 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:37.922919 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:37.922934 | orchestrator | + sleep 5 2026-02-05 00:38:42.927844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:42.965682 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:42.965779 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:42.965794 | orchestrator | + sleep 5 2026-02-05 00:38:47.972908 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:48.021277 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:48.021373 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:48.021388 | orchestrator | + sleep 5 2026-02-05 00:38:53.023145 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:53.037151 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:53.037275 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:38:53.037293 | orchestrator | + sleep 5 2026-02-05 00:38:58.039901 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:38:58.075954 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:38:58.076055 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-05 00:38:58.076067 | orchestrator | + sleep 5 2026-02-05 00:39:03.079910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:03.107694 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:03.107793 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:03.107808 | orchestrator | + sleep 5 2026-02-05 00:39:08.110842 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:08.143960 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:08.144068 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:08.144085 | orchestrator | + sleep 5 2026-02-05 00:39:13.147678 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:13.181507 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:13.181657 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:13.181674 | orchestrator | + sleep 5 2026-02-05 00:39:18.185426 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:18.225650 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:18.225736 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-05 00:39:18.225748 | orchestrator | + local max_attempts=60 2026-02-05 00:39:18.225757 | orchestrator | + local name=kolla-ansible 2026-02-05 00:39:18.225766 | orchestrator | + local attempt_num=1 2026-02-05 00:39:18.226526 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-05 00:39:18.262935 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:18.263021 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-05 00:39:18.263031 | orchestrator | + local max_attempts=60 2026-02-05 00:39:18.263063 | orchestrator | + local name=osism-ansible 2026-02-05 00:39:18.263071 | 
orchestrator | + local attempt_num=1 2026-02-05 00:39:18.263725 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-05 00:39:18.294511 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:18.294677 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-05 00:39:18.294700 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-05 00:39:18.453013 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-05 00:39:18.611415 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-05 00:39:18.757824 | orchestrator | ARA in osism-ansible already disabled. 2026-02-05 00:39:18.904398 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-05 00:39:18.904504 | orchestrator | + osism apply gather-facts 2026-02-05 00:39:31.001526 | orchestrator | 2026-02-05 00:39:30 | INFO  | Task cdb631c8-2362-43d0-8523-ec9fa3e6a1ee (gather-facts) was prepared for execution. 2026-02-05 00:39:31.001703 | orchestrator | 2026-02-05 00:39:30 | INFO  | It takes a moment until task cdb631c8-2362-43d0-8523-ec9fa3e6a1ee (gather-facts) has been started and output is visible here. 
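The `set -x` trace above reveals the shape of the `wait_for_container_healthy` helper: poll `docker inspect` for the container's health status every 5 seconds until it reports `healthy`, giving up after `max_attempts` polls (here it waited roughly a minute for ceph-ansible to go `unhealthy` → `starting` → `healthy`). A reconstructed sketch, inferred from the trace rather than taken from the testbed scripts (the original invokes `/usr/bin/docker` directly; `docker` is used unqualified here):

```shell
# Inferred sketch of wait_for_container_healthy from the trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name")
        # Done as soon as the container's healthcheck passes.
        [[ "$status" == "healthy" ]] && return 0
        # Give up once the allowed number of polls is exhausted.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name still $status after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep this tolerates about five minutes of container startup before failing the job.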
2026-02-05 00:39:43.888179 | orchestrator | 2026-02-05 00:39:43.888291 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 00:39:43.888307 | orchestrator | 2026-02-05 00:39:43.888318 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-05 00:39:43.888330 | orchestrator | Thursday 05 February 2026 00:39:34 +0000 (0:00:00.162) 0:00:00.162 ***** 2026-02-05 00:39:43.888341 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:39:43.888354 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:39:43.888366 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:39:43.888377 | orchestrator | ok: [testbed-manager] 2026-02-05 00:39:43.888388 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:39:43.888399 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:39:43.888410 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:39:43.888421 | orchestrator | 2026-02-05 00:39:43.888432 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 00:39:43.888443 | orchestrator | 2026-02-05 00:39:43.888454 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 00:39:43.888465 | orchestrator | Thursday 05 February 2026 00:39:43 +0000 (0:00:08.546) 0:00:08.708 ***** 2026-02-05 00:39:43.888476 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:39:43.888489 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:39:43.888500 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:39:43.888511 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:39:43.888522 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:39:43.888532 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:39:43.888543 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:39:43.888597 | orchestrator | 2026-02-05 00:39:43.888630 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-05 00:39:43.888642 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888655 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888666 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888677 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888688 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888699 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888711 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:43.888745 | orchestrator | 2026-02-05 00:39:43.888758 | orchestrator | 2026-02-05 00:39:43.888771 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:39:43.888784 | orchestrator | Thursday 05 February 2026 00:39:43 +0000 (0:00:00.437) 0:00:09.146 ***** 2026-02-05 00:39:43.888796 | orchestrator | =============================================================================== 2026-02-05 00:39:43.888809 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.55s 2026-02-05 00:39:43.888822 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2026-02-05 00:39:44.065065 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-05 00:39:44.079790 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-05 
00:39:44.089812 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-05 00:39:44.099404 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-05 00:39:44.117212 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-05 00:39:44.129604 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-05 00:39:44.139367 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-05 00:39:44.148110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-05 00:39:44.157188 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-05 00:39:44.167101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-05 00:39:44.180280 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-05 00:39:44.190092 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-05 00:39:44.198982 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-05 00:39:44.208452 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-05 00:39:44.218465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-05 00:39:44.227010 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-05 00:39:44.238197 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-05 00:39:44.244970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-05 00:39:44.253076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-05 00:39:44.261391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-05 00:39:44.268296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-05 00:39:44.277256 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-05 00:39:44.284153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-05 00:39:44.294232 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-05 00:39:44.551335 | orchestrator | ok: Runtime: 0:23:55.115265 2026-02-05 00:39:44.662331 | 2026-02-05 00:39:44.662563 | TASK [Deploy services] 2026-02-05 00:39:45.197163 | orchestrator | skipping: Conditional result was False 2026-02-05 00:39:45.213406 | 2026-02-05 00:39:45.213564 | TASK [Deploy in a nutshell] 2026-02-05 00:39:45.925801 | orchestrator | + set -e 2026-02-05 00:39:45.925978 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 00:39:45.926004 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 00:39:45.926063 | orchestrator | ++ INTERACTIVE=false 2026-02-05 00:39:45.926081 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 00:39:45.926095 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 00:39:45.926123 | 
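The long run of `sudo ln -sf` commands above all follow one pattern: expose a numbered script under `/opt/configuration/scripts/` as a short helper name in `/usr/local/bin`. As an illustration only (the function, its `bindir` parameter, and the loop are hypothetical; the two example paths are taken from the log), the pattern could be expressed as a map plus a loop:

```shell
# Hypothetical helper illustrating the symlink pattern from the log;
# the actual job issues one explicit ln -sf per helper.
install_helper_links() {
    local bindir=$1
    declare -A helpers=(
        [deploy-infrastructure]=/opt/configuration/scripts/deploy/200-infrastructure.sh
        [upgrade-openstack]=/opt/configuration/scripts/upgrade/300-openstack.sh
    )
    local name
    for name in "${!helpers[@]}"; do
        # -s: symbolic link; -f: replace an existing link, so reruns are idempotent
        ln -sf "${helpers[$name]}" "$bindir/$name"
    done
}
```

The `-f` flag is what lets the job be re-run on the same static node without `ln` failing on links left by a previous build.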
orchestrator | + source /opt/manager-vars.sh
2026-02-05 00:39:45.926168 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 00:39:45.926198 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 00:39:45.926213 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 00:39:45.926229 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 00:39:45.926241 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 00:39:45.926260 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 00:39:45.926271 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 00:39:45.926291 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 00:39:45.926302 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 00:39:45.926317 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 00:39:45.926371 | orchestrator | ++ export ARA=false
2026-02-05 00:39:45.926391 | orchestrator | ++ ARA=false
2026-02-05 00:39:45.926411 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 00:39:45.926431 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 00:39:45.926450 | orchestrator | ++ export TEMPEST=true
2026-02-05 00:39:45.926469 | orchestrator | ++ TEMPEST=true
2026-02-05 00:39:45.926486 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 00:39:45.926506 | orchestrator | ++ IS_ZUUL=true
2026-02-05 00:39:45.926526 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243
2026-02-05 00:39:45.926546 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.243
2026-02-05 00:39:45.926592 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 00:39:45.926612 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 00:39:45.926629 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 00:39:45.926648 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 00:39:45.926660 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 00:39:45.926671 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 00:39:45.926689 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 00:39:45.926700 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 00:39:45.926715 | orchestrator |
2026-02-05 00:39:45.926727 | orchestrator | # PULL IMAGES
2026-02-05 00:39:45.926737 | orchestrator |
2026-02-05 00:39:45.926752 | orchestrator | + echo
2026-02-05 00:39:45.926763 | orchestrator | + echo '# PULL IMAGES'
2026-02-05 00:39:45.926775 | orchestrator | + echo
2026-02-05 00:39:45.928493 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-05 00:39:45.983232 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 00:39:45.983446 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-05 00:39:47.637737 | orchestrator | 2026-02-05 00:39:47 | INFO  | Trying to run play pull-images in environment custom
2026-02-05 00:39:57.830715 | orchestrator | 2026-02-05 00:39:57 | INFO  | Task 461f0c66-d032-4d1d-b5dd-7dc64bdc236f (pull-images) was prepared for execution.
2026-02-05 00:39:57.830894 | orchestrator | 2026-02-05 00:39:57 | INFO  | Task 461f0c66-d032-4d1d-b5dd-7dc64bdc236f is running in background. No more output. Check ARA for logs.
2026-02-05 00:39:59.826755 | orchestrator | 2026-02-05 00:39:59 | INFO  | Trying to run play wipe-partitions in environment custom
2026-02-05 00:40:10.103017 | orchestrator | 2026-02-05 00:40:10 | INFO  | Task 2119a671-460d-4869-8304-07102a789aae (wipe-partitions) was prepared for execution.
2026-02-05 00:40:10.103128 | orchestrator | 2026-02-05 00:40:10 | INFO  | It takes a moment until task 2119a671-460d-4869-8304-07102a789aae (wipe-partitions) has been started and output is visible here.
2026-02-05 00:40:22.463017 | orchestrator |
2026-02-05 00:40:22.463157 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-02-05 00:40:22.463185 | orchestrator |
2026-02-05 00:40:22.463203 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-02-05 00:40:22.463229 | orchestrator | Thursday 05 February 2026 00:40:14 +0000 (0:00:00.111) 0:00:00.111 *****
2026-02-05 00:40:22.463242 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:40:22.463253 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:40:22.463263 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:40:22.463279 | orchestrator |
2026-02-05 00:40:22.463296 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-02-05 00:40:22.463347 | orchestrator | Thursday 05 February 2026 00:40:15 +0000 (0:00:00.553) 0:00:00.665 *****
2026-02-05 00:40:22.463366 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:22.463383 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:40:22.463398 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:40:22.463420 | orchestrator |
2026-02-05 00:40:22.463437 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-02-05 00:40:22.463453 | orchestrator | Thursday 05 February 2026 00:40:15 +0000 (0:00:00.277) 0:00:00.942 *****
2026-02-05 00:40:22.463470 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:40:22.463488 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:40:22.463503 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:40:22.463519 | orchestrator |
2026-02-05 00:40:22.463565 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-02-05 00:40:22.463585 | orchestrator | Thursday 05 February 2026 00:40:15 +0000 (0:00:00.545) 0:00:01.487 *****
2026-02-05 00:40:22.463601 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:22.463619 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:40:22.463635 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:40:22.463650 | orchestrator |
2026-02-05 00:40:22.463666 | orchestrator | TASK [Check device availability] ***********************************************
2026-02-05 00:40:22.463684 | orchestrator | Thursday 05 February 2026 00:40:16 +0000 (0:00:00.210) 0:00:01.698 *****
2026-02-05 00:40:22.463700 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-05 00:40:22.463725 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-05 00:40:22.463742 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-05 00:40:22.463758 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-05 00:40:22.463775 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-05 00:40:22.463792 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-05 00:40:22.463808 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-05 00:40:22.463825 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-05 00:40:22.463842 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-05 00:40:22.463857 | orchestrator |
2026-02-05 00:40:22.463873 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-02-05 00:40:22.463889 | orchestrator | Thursday 05 February 2026 00:40:17 +0000 (0:00:01.179) 0:00:02.878 *****
2026-02-05 00:40:22.463905 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-02-05 00:40:22.463920 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-02-05 00:40:22.463935 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-02-05 00:40:22.463950 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-02-05 00:40:22.463965 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-02-05 00:40:22.463979 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-02-05 00:40:22.463994 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-02-05 00:40:22.464008 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-02-05 00:40:22.464023 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-02-05 00:40:22.464038 | orchestrator |
2026-02-05 00:40:22.464053 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-02-05 00:40:22.464069 | orchestrator | Thursday 05 February 2026 00:40:18 +0000 (0:00:01.504) 0:00:04.383 *****
2026-02-05 00:40:22.464084 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-05 00:40:22.464101 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-05 00:40:22.464116 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-05 00:40:22.464133 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-05 00:40:22.464150 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-05 00:40:22.464177 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-05 00:40:22.464194 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-05 00:40:22.464211 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-05 00:40:22.464243 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-05 00:40:22.464260 | orchestrator |
2026-02-05 00:40:22.464277 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-02-05 00:40:22.464293 | orchestrator | Thursday 05 February 2026 00:40:21 +0000 (0:00:02.143) 0:00:06.526 *****
2026-02-05 00:40:22.464308 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:40:22.464323 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:40:22.464339 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:40:22.464354 | orchestrator |
2026-02-05 00:40:22.464369 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-02-05 00:40:22.464385 | orchestrator | Thursday 05 February 2026 00:40:21 +0000 (0:00:00.583) 0:00:07.110 *****
2026-02-05 00:40:22.464400 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:40:22.464415 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:40:22.464430 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:40:22.464445 | orchestrator |
2026-02-05 00:40:22.464460 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:40:22.464476 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:22.464495 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:22.464562 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:22.464582 | orchestrator |
2026-02-05 00:40:22.464599 | orchestrator |
2026-02-05 00:40:22.464614 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:40:22.464630 | orchestrator | Thursday 05 February 2026 00:40:22 +0000 (0:00:00.645) 0:00:07.755 *****
2026-02-05 00:40:22.464645 | orchestrator | ===============================================================================
2026-02-05 00:40:22.464661 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.14s
2026-02-05 00:40:22.464676 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.50s
2026-02-05 00:40:22.464692 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2026-02-05 00:40:22.464707 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2026-02-05 00:40:22.464723 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s
2026-02-05 00:40:22.464738 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s
2026-02-05 00:40:22.464753 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s
2026-02-05 00:40:22.464769 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2026-02-05 00:40:22.464784 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s
2026-02-05 00:40:34.433285 | orchestrator | 2026-02-05 00:40:34 | INFO  | Task 98b183aa-4abf-44b9-8153-ffbbe1685cd3 (facts) was prepared for execution.
2026-02-05 00:40:34.433417 | orchestrator | 2026-02-05 00:40:34 | INFO  | It takes a moment until task 98b183aa-4abf-44b9-8153-ffbbe1685cd3 (facts) has been started and output is visible here.
2026-02-05 00:40:46.158799 | orchestrator |
2026-02-05 00:40:46.158899 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-05 00:40:46.158912 | orchestrator |
2026-02-05 00:40:46.158924 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-05 00:40:46.158934 | orchestrator | Thursday 05 February 2026 00:40:38 +0000 (0:00:00.191) 0:00:00.191 *****
2026-02-05 00:40:46.158945 | orchestrator | ok: [testbed-manager]
2026-02-05 00:40:46.158955 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:40:46.158966 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:40:46.158975 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:40:46.159010 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:40:46.159020 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:40:46.159030 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:40:46.159039 | orchestrator |
2026-02-05 00:40:46.159051 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-05 00:40:46.159061 | orchestrator | Thursday 05 February 2026 00:40:39 +0000 (0:00:00.984) 0:00:01.176 *****
2026-02-05 00:40:46.159071 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:40:46.159082 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:40:46.159091 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:40:46.159101 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:40:46.159111 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:46.159120 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:40:46.159130 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:40:46.159140 | orchestrator |
2026-02-05 00:40:46.159149 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 00:40:46.159159 | orchestrator |
2026-02-05 00:40:46.159169 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 00:40:46.159178 | orchestrator | Thursday 05 February 2026 00:40:40 +0000 (0:00:01.051) 0:00:02.227 *****
2026-02-05 00:40:46.159188 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:40:46.159198 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:40:46.159207 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:40:46.159218 | orchestrator | ok: [testbed-manager]
2026-02-05 00:40:46.159228 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:40:46.159237 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:40:46.159247 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:40:46.159256 | orchestrator |
2026-02-05 00:40:46.159266 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-05 00:40:46.159275 | orchestrator |
2026-02-05 00:40:46.159285 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-05 00:40:46.159309 | orchestrator | Thursday 05 February 2026 00:40:45 +0000 (0:00:05.436) 0:00:07.663 *****
2026-02-05 00:40:46.159320 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:40:46.159329 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:40:46.159339 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:40:46.159348 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:40:46.159358 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:46.159368 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:40:46.159377 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:40:46.159387 | orchestrator |
2026-02-05 00:40:46.159396 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:40:46.159406 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159418 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159427 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159437 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159447 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159456 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159466 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:40:46.159476 | orchestrator |
2026-02-05 00:40:46.159503 | orchestrator |
2026-02-05 00:40:46.159523 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:40:46.159558 | orchestrator | Thursday 05 February 2026 00:40:45 +0000 (0:00:00.428) 0:00:08.092 *****
2026-02-05 00:40:46.159568 | orchestrator | ===============================================================================
2026-02-05 00:40:46.159578 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.44s
2026-02-05 00:40:46.159596 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
2026-02-05 00:40:46.159607 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s
2026-02-05 00:40:46.159616 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s
2026-02-05 00:40:48.107848 | orchestrator | 2026-02-05 00:40:48 | INFO  | Task c418db69-bcc6-4549-9c2d-26b3c4f31d21 (ceph-configure-lvm-volumes) was prepared for execution.
2026-02-05 00:40:48.107937 | orchestrator | 2026-02-05 00:40:48 | INFO  | It takes a moment until task c418db69-bcc6-4549-9c2d-26b3c4f31d21 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-05 00:40:58.125580 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 00:40:58.125651 | orchestrator | 2.16.14
2026-02-05 00:40:58.125658 | orchestrator |
2026-02-05 00:40:58.125663 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-05 00:40:58.125669 | orchestrator |
2026-02-05 00:40:58.125675 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 00:40:58.125680 | orchestrator | Thursday 05 February 2026 00:40:51 +0000 (0:00:00.281) 0:00:00.281 *****
2026-02-05 00:40:58.125684 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 00:40:58.125688 | orchestrator |
2026-02-05 00:40:58.125692 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-05 00:40:58.125696 | orchestrator | Thursday 05 February 2026 00:40:52 +0000 (0:00:00.217) 0:00:00.498 *****
2026-02-05 00:40:58.125700 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:40:58.125705 | orchestrator |
2026-02-05 00:40:58.125709 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125713 | orchestrator | Thursday 05 February 2026 00:40:52 +0000 (0:00:00.202) 0:00:00.701 *****
2026-02-05 00:40:58.125716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:40:58.125721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:40:58.125724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:40:58.125728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:40:58.125732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:40:58.125736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:40:58.125740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:40:58.125743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:40:58.125747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-05 00:40:58.125751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:40:58.125759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:40:58.125763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:40:58.125767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:40:58.125771 | orchestrator |
2026-02-05 00:40:58.125775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125779 | orchestrator | Thursday 05 February 2026 00:40:52 +0000 (0:00:00.392) 0:00:01.094 *****
2026-02-05 00:40:58.125796 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125800 | orchestrator |
2026-02-05 00:40:58.125804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125808 | orchestrator | Thursday 05 February 2026 00:40:52 +0000 (0:00:00.181) 0:00:01.275 *****
2026-02-05 00:40:58.125812 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125815 | orchestrator |
2026-02-05 00:40:58.125819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125823 | orchestrator | Thursday 05 February 2026 00:40:53 +0000 (0:00:00.161) 0:00:01.436 *****
2026-02-05 00:40:58.125826 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125830 | orchestrator |
2026-02-05 00:40:58.125834 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125838 | orchestrator | Thursday 05 February 2026 00:40:53 +0000 (0:00:00.168) 0:00:01.605 *****
2026-02-05 00:40:58.125844 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125847 | orchestrator |
2026-02-05 00:40:58.125851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125855 | orchestrator | Thursday 05 February 2026 00:40:53 +0000 (0:00:00.179) 0:00:01.785 *****
2026-02-05 00:40:58.125859 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125863 | orchestrator |
2026-02-05 00:40:58.125866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125870 | orchestrator | Thursday 05 February 2026 00:40:53 +0000 (0:00:00.173) 0:00:01.959 *****
2026-02-05 00:40:58.125874 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125878 | orchestrator |
2026-02-05 00:40:58.125882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125885 | orchestrator | Thursday 05 February 2026 00:40:53 +0000 (0:00:00.183) 0:00:02.142 *****
2026-02-05 00:40:58.125889 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125893 | orchestrator |
2026-02-05 00:40:58.125897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125900 | orchestrator | Thursday 05 February 2026 00:40:54 +0000 (0:00:00.202) 0:00:02.345 *****
2026-02-05 00:40:58.125904 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.125908 | orchestrator |
2026-02-05 00:40:58.125911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125915 | orchestrator | Thursday 05 February 2026 00:40:54 +0000 (0:00:00.174) 0:00:02.519 *****
2026-02-05 00:40:58.125919 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f)
2026-02-05 00:40:58.125924 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f)
2026-02-05 00:40:58.125928 | orchestrator |
2026-02-05 00:40:58.125931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125944 | orchestrator | Thursday 05 February 2026 00:40:54 +0000 (0:00:00.357) 0:00:02.877 *****
2026-02-05 00:40:58.125948 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0)
2026-02-05 00:40:58.125952 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0)
2026-02-05 00:40:58.125956 | orchestrator |
2026-02-05 00:40:58.125959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125963 | orchestrator | Thursday 05 February 2026 00:40:55 +0000 (0:00:00.507) 0:00:03.384 *****
2026-02-05 00:40:58.125967 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9)
2026-02-05 00:40:58.125971 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9)
2026-02-05 00:40:58.125975 | orchestrator |
2026-02-05 00:40:58.125978 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.125982 | orchestrator | Thursday 05 February 2026 00:40:55 +0000 (0:00:00.499) 0:00:03.884 *****
2026-02-05 00:40:58.125990 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d)
2026-02-05 00:40:58.125993 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d)
2026-02-05 00:40:58.125997 | orchestrator |
2026-02-05 00:40:58.126001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:40:58.126005 | orchestrator | Thursday 05 February 2026 00:40:56 +0000 (0:00:00.611) 0:00:04.495 *****
2026-02-05 00:40:58.126009 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 00:40:58.126038 | orchestrator |
2026-02-05 00:40:58.126046 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126050 | orchestrator | Thursday 05 February 2026 00:40:56 +0000 (0:00:00.298) 0:00:04.794 *****
2026-02-05 00:40:58.126054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:40:58.126057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:40:58.126061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:40:58.126065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:40:58.126069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:40:58.126073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:40:58.126076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:40:58.126080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:40:58.126084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-05 00:40:58.126087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:40:58.126091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:40:58.126095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:40:58.126099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:40:58.126102 | orchestrator |
2026-02-05 00:40:58.126106 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126110 | orchestrator | Thursday 05 February 2026 00:40:56 +0000 (0:00:00.359) 0:00:05.153 *****
2026-02-05 00:40:58.126114 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126117 | orchestrator |
2026-02-05 00:40:58.126121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126125 | orchestrator | Thursday 05 February 2026 00:40:57 +0000 (0:00:00.182) 0:00:05.336 *****
2026-02-05 00:40:58.126128 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126133 | orchestrator |
2026-02-05 00:40:58.126137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126142 | orchestrator | Thursday 05 February 2026 00:40:57 +0000 (0:00:00.178) 0:00:05.515 *****
2026-02-05 00:40:58.126146 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126150 | orchestrator |
2026-02-05 00:40:58.126155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126160 | orchestrator | Thursday 05 February 2026 00:40:57 +0000 (0:00:00.181) 0:00:05.696 *****
2026-02-05 00:40:58.126164 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126168 | orchestrator |
2026-02-05 00:40:58.126173 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126177 | orchestrator | Thursday 05 February 2026 00:40:57 +0000 (0:00:00.199) 0:00:05.896 *****
2026-02-05 00:40:58.126185 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126189 | orchestrator |
2026-02-05 00:40:58.126194 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126199 | orchestrator | Thursday 05 February 2026 00:40:57 +0000 (0:00:00.173) 0:00:06.070 *****
2026-02-05 00:40:58.126203 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126207 | orchestrator |
2026-02-05 00:40:58.126212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:40:58.126216 | orchestrator | Thursday 05 February 2026 00:40:57 +0000 (0:00:00.180) 0:00:06.250 *****
2026-02-05 00:40:58.126221 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:40:58.126225 | orchestrator |
2026-02-05 00:40:58.126233 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:41:04.587177 | orchestrator | Thursday 05 February 2026 00:40:58 +0000 (0:00:00.186) 0:00:06.437 *****
2026-02-05 00:41:04.587285 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587303 | orchestrator |
2026-02-05 00:41:04.587316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:41:04.587328 | orchestrator | Thursday 05 February 2026 00:40:58 +0000 (0:00:00.191) 0:00:06.628 *****
2026-02-05 00:41:04.587339 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-05 00:41:04.587351 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-05 00:41:04.587362 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-05 00:41:04.587373 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-05 00:41:04.587385 | orchestrator |
2026-02-05 00:41:04.587396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:41:04.587406 | orchestrator | Thursday 05 February 2026 00:40:59 +0000 (0:00:00.816) 0:00:07.444 *****
2026-02-05 00:41:04.587417 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587428 | orchestrator |
2026-02-05 00:41:04.587439 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:41:04.587450 | orchestrator | Thursday 05 February 2026 00:40:59 +0000 (0:00:00.184) 0:00:07.629 *****
2026-02-05 00:41:04.587461 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587472 | orchestrator |
2026-02-05 00:41:04.587483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:41:04.587494 | orchestrator | Thursday 05 February 2026 00:40:59 +0000 (0:00:00.191) 0:00:07.820 *****
2026-02-05 00:41:04.587505 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587551 | orchestrator |
2026-02-05 00:41:04.587572 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:41:04.587593 | orchestrator | Thursday 05 February 2026 00:40:59 +0000 (0:00:00.190) 0:00:08.010 *****
2026-02-05 00:41:04.587612 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587632 | orchestrator |
2026-02-05 00:41:04.587651 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-05 00:41:04.587669 | orchestrator | Thursday 05 February 2026 00:40:59 +0000 (0:00:00.175) 0:00:08.186 *****
2026-02-05 00:41:04.587690 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-05 00:41:04.587709 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-05 00:41:04.587727 | orchestrator |
2026-02-05 00:41:04.587774 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-05 00:41:04.587797 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.141) 0:00:08.327 *****
2026-02-05 00:41:04.587818 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587837 | orchestrator |
2026-02-05 00:41:04.587858 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-05 00:41:04.587878 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.100) 0:00:08.428 *****
2026-02-05 00:41:04.587892 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587905 | orchestrator |
2026-02-05 00:41:04.587917 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-05 00:41:04.587931 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.105) 0:00:08.534 *****
2026-02-05 00:41:04.587970 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.587984 | orchestrator |
2026-02-05 00:41:04.587997 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-05 00:41:04.588010 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.116) 0:00:08.650 *****
2026-02-05 00:41:04.588023 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:41:04.588036 | orchestrator |
2026-02-05 00:41:04.588049 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-05 00:41:04.588062 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.123) 0:00:08.774 *****
2026-02-05 00:41:04.588075 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e842383-5890-511f-b982-bff6d8042060'}})
2026-02-05 00:41:04.588087 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '22ded513-57d8-573e-a796-c8381d672537'}})
2026-02-05 00:41:04.588097 | orchestrator |
2026-02-05 00:41:04.588109 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-05 00:41:04.588120 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.153) 0:00:08.927 *****
2026-02-05 00:41:04.588131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e842383-5890-511f-b982-bff6d8042060'}})
2026-02-05 00:41:04.588151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '22ded513-57d8-573e-a796-c8381d672537'}})
2026-02-05 00:41:04.588162 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:04.588173 | orchestrator |
2026-02-05 00:41:04.588183 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-05 00:41:04.588194 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.135) 0:00:09.062 *****
2026-02-05 00:41:04.588205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e842383-5890-511f-b982-bff6d8042060'}})
2026-02-05 00:41:04.588216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '22ded513-57d8-573e-a796-c8381d672537'}})  2026-02-05 00:41:04.588227 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:41:04.588238 | orchestrator | 2026-02-05 00:41:04.588249 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-05 00:41:04.588260 | orchestrator | Thursday 05 February 2026 00:41:00 +0000 (0:00:00.248) 0:00:09.311 ***** 2026-02-05 00:41:04.588270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e842383-5890-511f-b982-bff6d8042060'}})  2026-02-05 00:41:04.588302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '22ded513-57d8-573e-a796-c8381d672537'}})  2026-02-05 00:41:04.588314 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:41:04.588325 | orchestrator | 2026-02-05 00:41:04.588336 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-05 00:41:04.588353 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.138) 0:00:09.450 ***** 2026-02-05 00:41:04.588364 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:41:04.588375 | orchestrator | 2026-02-05 00:41:04.588386 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-05 00:41:04.588397 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.116) 0:00:09.567 ***** 2026-02-05 00:41:04.588407 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:41:04.588418 | orchestrator | 2026-02-05 00:41:04.588429 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-05 00:41:04.588439 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.131) 0:00:09.698 ***** 2026-02-05 00:41:04.588450 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:41:04.588460 | orchestrator | 2026-02-05 00:41:04.588471 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-05 00:41:04.588487 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.126) 0:00:09.824 ***** 2026-02-05 00:41:04.588548 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:41:04.588569 | orchestrator | 2026-02-05 00:41:04.588589 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-05 00:41:04.588609 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.121) 0:00:09.946 ***** 2026-02-05 00:41:04.588629 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:41:04.588675 | orchestrator | 2026-02-05 00:41:04.588696 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-05 00:41:04.588708 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.125) 0:00:10.072 ***** 2026-02-05 00:41:04.588718 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 00:41:04.588729 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:41:04.588740 | orchestrator |  "sdb": { 2026-02-05 00:41:04.588751 | orchestrator |  "osd_lvm_uuid": "3e842383-5890-511f-b982-bff6d8042060" 2026-02-05 00:41:04.588763 | orchestrator |  }, 2026-02-05 00:41:04.588774 | orchestrator |  "sdc": { 2026-02-05 00:41:04.588785 | orchestrator |  "osd_lvm_uuid": "22ded513-57d8-573e-a796-c8381d672537" 2026-02-05 00:41:04.588795 | orchestrator |  } 2026-02-05 00:41:04.588806 | orchestrator |  } 2026-02-05 00:41:04.588817 | orchestrator | } 2026-02-05 00:41:04.588828 | orchestrator | 2026-02-05 00:41:04.588839 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-05 00:41:04.588850 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.113) 0:00:10.186 ***** 2026-02-05 00:41:04.588860 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:41:04.588871 | orchestrator | 2026-02-05 00:41:04.588882 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-05 00:41:04.588892 | orchestrator | Thursday 05 February 2026 00:41:01 +0000 (0:00:00.119) 0:00:10.305 ***** 2026-02-05 00:41:04.588903 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:41:04.588914 | orchestrator | 2026-02-05 00:41:04.588925 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-05 00:41:04.588936 | orchestrator | Thursday 05 February 2026 00:41:02 +0000 (0:00:00.121) 0:00:10.427 ***** 2026-02-05 00:41:04.588946 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:41:04.588957 | orchestrator | 2026-02-05 00:41:04.588968 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-05 00:41:04.588978 | orchestrator | Thursday 05 February 2026 00:41:02 +0000 (0:00:00.121) 0:00:10.549 ***** 2026-02-05 00:41:04.588989 | orchestrator | changed: [testbed-node-3] => { 2026-02-05 00:41:04.588999 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-05 00:41:04.589010 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:41:04.589021 | orchestrator |  "sdb": { 2026-02-05 00:41:04.589032 | orchestrator |  "osd_lvm_uuid": "3e842383-5890-511f-b982-bff6d8042060" 2026-02-05 00:41:04.589106 | orchestrator |  }, 2026-02-05 00:41:04.589118 | orchestrator |  "sdc": { 2026-02-05 00:41:04.589144 | orchestrator |  "osd_lvm_uuid": "22ded513-57d8-573e-a796-c8381d672537" 2026-02-05 00:41:04.589156 | orchestrator |  } 2026-02-05 00:41:04.589167 | orchestrator |  }, 2026-02-05 00:41:04.589178 | orchestrator |  "lvm_volumes": [ 2026-02-05 00:41:04.589189 | orchestrator |  { 2026-02-05 00:41:04.589200 | orchestrator |  "data": "osd-block-3e842383-5890-511f-b982-bff6d8042060", 2026-02-05 00:41:04.589211 | orchestrator |  "data_vg": "ceph-3e842383-5890-511f-b982-bff6d8042060" 2026-02-05 
00:41:04.589222 | orchestrator |  }, 2026-02-05 00:41:04.589233 | orchestrator |  { 2026-02-05 00:41:04.589243 | orchestrator |  "data": "osd-block-22ded513-57d8-573e-a796-c8381d672537", 2026-02-05 00:41:04.589254 | orchestrator |  "data_vg": "ceph-22ded513-57d8-573e-a796-c8381d672537" 2026-02-05 00:41:04.589274 | orchestrator |  } 2026-02-05 00:41:04.589285 | orchestrator |  ] 2026-02-05 00:41:04.589296 | orchestrator |  } 2026-02-05 00:41:04.589307 | orchestrator | } 2026-02-05 00:41:04.589328 | orchestrator | 2026-02-05 00:41:04.589339 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-05 00:41:04.589350 | orchestrator | Thursday 05 February 2026 00:41:02 +0000 (0:00:00.298) 0:00:10.847 ***** 2026-02-05 00:41:04.589361 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 00:41:04.589372 | orchestrator | 2026-02-05 00:41:04.589382 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-05 00:41:04.589393 | orchestrator | 2026-02-05 00:41:04.589404 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 00:41:04.589415 | orchestrator | Thursday 05 February 2026 00:41:04 +0000 (0:00:01.602) 0:00:12.450 ***** 2026-02-05 00:41:04.589425 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-05 00:41:04.589436 | orchestrator | 2026-02-05 00:41:04.589447 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 00:41:04.589458 | orchestrator | Thursday 05 February 2026 00:41:04 +0000 (0:00:00.241) 0:00:12.691 ***** 2026-02-05 00:41:04.589469 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:41:04.589480 | orchestrator | 2026-02-05 00:41:04.589500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441213 | orchestrator | Thursday 05 
February 2026 00:41:04 +0000 (0:00:00.206) 0:00:12.898 ***** 2026-02-05 00:41:11.441352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-05 00:41:11.441372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:41:11.441384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:41:11.441396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:41:11.441407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:41:11.441418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:41:11.441429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:41:11.441441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:41:11.441452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-05 00:41:11.441463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:41:11.441474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:41:11.441484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:41:11.441501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:41:11.441513 | orchestrator | 2026-02-05 00:41:11.441599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441611 | orchestrator | Thursday 05 February 2026 00:41:04 +0000 (0:00:00.315) 0:00:13.213 ***** 2026-02-05 
00:41:11.441622 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441634 | orchestrator | 2026-02-05 00:41:11.441645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441656 | orchestrator | Thursday 05 February 2026 00:41:05 +0000 (0:00:00.167) 0:00:13.381 ***** 2026-02-05 00:41:11.441666 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441677 | orchestrator | 2026-02-05 00:41:11.441688 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441699 | orchestrator | Thursday 05 February 2026 00:41:05 +0000 (0:00:00.185) 0:00:13.566 ***** 2026-02-05 00:41:11.441710 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441720 | orchestrator | 2026-02-05 00:41:11.441735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441748 | orchestrator | Thursday 05 February 2026 00:41:05 +0000 (0:00:00.178) 0:00:13.744 ***** 2026-02-05 00:41:11.441788 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441801 | orchestrator | 2026-02-05 00:41:11.441814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441828 | orchestrator | Thursday 05 February 2026 00:41:05 +0000 (0:00:00.179) 0:00:13.923 ***** 2026-02-05 00:41:11.441840 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441853 | orchestrator | 2026-02-05 00:41:11.441865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.441878 | orchestrator | Thursday 05 February 2026 00:41:06 +0000 (0:00:00.504) 0:00:14.428 ***** 2026-02-05 00:41:11.441892 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441904 | orchestrator | 2026-02-05 00:41:11.441935 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-05 00:41:11.441948 | orchestrator | Thursday 05 February 2026 00:41:06 +0000 (0:00:00.182) 0:00:14.611 ***** 2026-02-05 00:41:11.441967 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.441987 | orchestrator | 2026-02-05 00:41:11.442006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.442122 | orchestrator | Thursday 05 February 2026 00:41:06 +0000 (0:00:00.169) 0:00:14.780 ***** 2026-02-05 00:41:11.442143 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442162 | orchestrator | 2026-02-05 00:41:11.442181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.442193 | orchestrator | Thursday 05 February 2026 00:41:06 +0000 (0:00:00.181) 0:00:14.962 ***** 2026-02-05 00:41:11.442203 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3) 2026-02-05 00:41:11.442230 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3) 2026-02-05 00:41:11.442242 | orchestrator | 2026-02-05 00:41:11.442252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.442274 | orchestrator | Thursday 05 February 2026 00:41:07 +0000 (0:00:00.398) 0:00:15.361 ***** 2026-02-05 00:41:11.442286 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d) 2026-02-05 00:41:11.442297 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d) 2026-02-05 00:41:11.442308 | orchestrator | 2026-02-05 00:41:11.442318 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.442329 | orchestrator | Thursday 05 February 2026 00:41:07 +0000 (0:00:00.363) 0:00:15.725 ***** 2026-02-05 00:41:11.442340 | 
orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf) 2026-02-05 00:41:11.442351 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf) 2026-02-05 00:41:11.442362 | orchestrator | 2026-02-05 00:41:11.442373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.442403 | orchestrator | Thursday 05 February 2026 00:41:07 +0000 (0:00:00.379) 0:00:16.104 ***** 2026-02-05 00:41:11.442415 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30) 2026-02-05 00:41:11.442426 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30) 2026-02-05 00:41:11.442437 | orchestrator | 2026-02-05 00:41:11.442447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:11.442458 | orchestrator | Thursday 05 February 2026 00:41:08 +0000 (0:00:00.426) 0:00:16.530 ***** 2026-02-05 00:41:11.442469 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 00:41:11.442480 | orchestrator | 2026-02-05 00:41:11.442490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442501 | orchestrator | Thursday 05 February 2026 00:41:08 +0000 (0:00:00.297) 0:00:16.828 ***** 2026-02-05 00:41:11.442512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-05 00:41:11.442565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:41:11.442577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:41:11.442588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop3) 2026-02-05 00:41:11.442598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:41:11.442609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:41:11.442620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:41:11.442630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:41:11.442641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-05 00:41:11.442652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:41:11.442662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:41:11.442673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:41:11.442684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:41:11.442694 | orchestrator | 2026-02-05 00:41:11.442705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442716 | orchestrator | Thursday 05 February 2026 00:41:08 +0000 (0:00:00.323) 0:00:17.151 ***** 2026-02-05 00:41:11.442727 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442737 | orchestrator | 2026-02-05 00:41:11.442748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442768 | orchestrator | Thursday 05 February 2026 00:41:09 +0000 (0:00:00.457) 0:00:17.609 ***** 2026-02-05 00:41:11.442779 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442789 | orchestrator | 2026-02-05 00:41:11.442800 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442811 | orchestrator | Thursday 05 February 2026 00:41:09 +0000 (0:00:00.179) 0:00:17.788 ***** 2026-02-05 00:41:11.442822 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442832 | orchestrator | 2026-02-05 00:41:11.442843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442854 | orchestrator | Thursday 05 February 2026 00:41:09 +0000 (0:00:00.186) 0:00:17.975 ***** 2026-02-05 00:41:11.442865 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442875 | orchestrator | 2026-02-05 00:41:11.442886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442897 | orchestrator | Thursday 05 February 2026 00:41:09 +0000 (0:00:00.184) 0:00:18.159 ***** 2026-02-05 00:41:11.442908 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442918 | orchestrator | 2026-02-05 00:41:11.442929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442940 | orchestrator | Thursday 05 February 2026 00:41:10 +0000 (0:00:00.194) 0:00:18.353 ***** 2026-02-05 00:41:11.442950 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.442961 | orchestrator | 2026-02-05 00:41:11.442972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.442983 | orchestrator | Thursday 05 February 2026 00:41:10 +0000 (0:00:00.189) 0:00:18.543 ***** 2026-02-05 00:41:11.442993 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.443004 | orchestrator | 2026-02-05 00:41:11.443015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.443025 | orchestrator | Thursday 05 February 2026 00:41:10 +0000 (0:00:00.177) 0:00:18.720 ***** 2026-02-05 00:41:11.443036 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:11.443053 | orchestrator | 2026-02-05 00:41:11.443064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.443074 | orchestrator | Thursday 05 February 2026 00:41:10 +0000 (0:00:00.163) 0:00:18.884 ***** 2026-02-05 00:41:11.443085 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-05 00:41:11.443096 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-05 00:41:11.443108 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-05 00:41:11.443118 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-05 00:41:11.443129 | orchestrator | 2026-02-05 00:41:11.443140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:11.443151 | orchestrator | Thursday 05 February 2026 00:41:11 +0000 (0:00:00.698) 0:00:19.582 ***** 2026-02-05 00:41:11.443161 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.972709 | orchestrator | 2026-02-05 00:41:16.972815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:16.972833 | orchestrator | Thursday 05 February 2026 00:41:11 +0000 (0:00:00.170) 0:00:19.752 ***** 2026-02-05 00:41:16.972846 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.972858 | orchestrator | 2026-02-05 00:41:16.972870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:16.972882 | orchestrator | Thursday 05 February 2026 00:41:11 +0000 (0:00:00.170) 0:00:19.923 ***** 2026-02-05 00:41:16.972893 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.972904 | orchestrator | 2026-02-05 00:41:16.972915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:16.972927 | orchestrator | Thursday 05 February 2026 00:41:11 +0000 (0:00:00.182) 
0:00:20.106 ***** 2026-02-05 00:41:16.972938 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.972949 | orchestrator | 2026-02-05 00:41:16.972960 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-05 00:41:16.972971 | orchestrator | Thursday 05 February 2026 00:41:12 +0000 (0:00:00.520) 0:00:20.627 ***** 2026-02-05 00:41:16.972982 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-05 00:41:16.972993 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-05 00:41:16.973004 | orchestrator | 2026-02-05 00:41:16.973015 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-05 00:41:16.973026 | orchestrator | Thursday 05 February 2026 00:41:12 +0000 (0:00:00.146) 0:00:20.773 ***** 2026-02-05 00:41:16.973037 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973048 | orchestrator | 2026-02-05 00:41:16.973060 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-05 00:41:16.973071 | orchestrator | Thursday 05 February 2026 00:41:12 +0000 (0:00:00.130) 0:00:20.904 ***** 2026-02-05 00:41:16.973082 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973093 | orchestrator | 2026-02-05 00:41:16.973104 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-05 00:41:16.973115 | orchestrator | Thursday 05 February 2026 00:41:12 +0000 (0:00:00.122) 0:00:21.026 ***** 2026-02-05 00:41:16.973126 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973137 | orchestrator | 2026-02-05 00:41:16.973148 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-05 00:41:16.973159 | orchestrator | Thursday 05 February 2026 00:41:12 +0000 (0:00:00.129) 0:00:21.156 ***** 2026-02-05 00:41:16.973170 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 00:41:16.973182 | orchestrator | 2026-02-05 00:41:16.973193 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-05 00:41:16.973204 | orchestrator | Thursday 05 February 2026 00:41:12 +0000 (0:00:00.116) 0:00:21.272 ***** 2026-02-05 00:41:16.973216 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}}) 2026-02-05 00:41:16.973228 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}}) 2026-02-05 00:41:16.973263 | orchestrator | 2026-02-05 00:41:16.973274 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-05 00:41:16.973285 | orchestrator | Thursday 05 February 2026 00:41:13 +0000 (0:00:00.162) 0:00:21.435 ***** 2026-02-05 00:41:16.973296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}})  2026-02-05 00:41:16.973327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}})  2026-02-05 00:41:16.973338 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973349 | orchestrator | 2026-02-05 00:41:16.973360 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-05 00:41:16.973371 | orchestrator | Thursday 05 February 2026 00:41:13 +0000 (0:00:00.115) 0:00:21.550 ***** 2026-02-05 00:41:16.973382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}})  2026-02-05 00:41:16.973393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}})  2026-02-05 00:41:16.973404 | orchestrator | skipping: [testbed-node-4] 2026-02-05 
00:41:16.973415 | orchestrator | 2026-02-05 00:41:16.973426 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-05 00:41:16.973436 | orchestrator | Thursday 05 February 2026 00:41:13 +0000 (0:00:00.140) 0:00:21.690 ***** 2026-02-05 00:41:16.973447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}})  2026-02-05 00:41:16.973459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}})  2026-02-05 00:41:16.973470 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973481 | orchestrator | 2026-02-05 00:41:16.973491 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-05 00:41:16.973502 | orchestrator | Thursday 05 February 2026 00:41:13 +0000 (0:00:00.136) 0:00:21.827 ***** 2026-02-05 00:41:16.973537 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:41:16.973557 | orchestrator | 2026-02-05 00:41:16.973568 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-05 00:41:16.973579 | orchestrator | Thursday 05 February 2026 00:41:13 +0000 (0:00:00.137) 0:00:21.965 ***** 2026-02-05 00:41:16.973590 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:41:16.973601 | orchestrator | 2026-02-05 00:41:16.973612 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-05 00:41:16.973623 | orchestrator | Thursday 05 February 2026 00:41:13 +0000 (0:00:00.118) 0:00:22.083 ***** 2026-02-05 00:41:16.973651 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973663 | orchestrator | 2026-02-05 00:41:16.973674 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-05 00:41:16.973685 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 
(0:00:00.297) 0:00:22.381 ***** 2026-02-05 00:41:16.973695 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973706 | orchestrator | 2026-02-05 00:41:16.973717 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-05 00:41:16.973742 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 (0:00:00.131) 0:00:22.512 ***** 2026-02-05 00:41:16.973753 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973764 | orchestrator | 2026-02-05 00:41:16.973775 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-05 00:41:16.973786 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 (0:00:00.121) 0:00:22.634 ***** 2026-02-05 00:41:16.973796 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:41:16.973807 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:41:16.973818 | orchestrator |  "sdb": { 2026-02-05 00:41:16.973831 | orchestrator |  "osd_lvm_uuid": "159372f8-6c52-51f3-a9af-3fbf7ffb45fe" 2026-02-05 00:41:16.973842 | orchestrator |  }, 2026-02-05 00:41:16.973863 | orchestrator |  "sdc": { 2026-02-05 00:41:16.973874 | orchestrator |  "osd_lvm_uuid": "523b4628-8322-5ebe-8cc3-60a2eeaa41a5" 2026-02-05 00:41:16.973885 | orchestrator |  } 2026-02-05 00:41:16.973896 | orchestrator |  } 2026-02-05 00:41:16.973907 | orchestrator | } 2026-02-05 00:41:16.973918 | orchestrator | 2026-02-05 00:41:16.973929 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-05 00:41:16.973940 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 (0:00:00.205) 0:00:22.839 ***** 2026-02-05 00:41:16.973950 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.973961 | orchestrator | 2026-02-05 00:41:16.973972 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-05 00:41:16.973983 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 
(0:00:00.104) 0:00:22.944 ***** 2026-02-05 00:41:16.973993 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.974004 | orchestrator | 2026-02-05 00:41:16.974065 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-05 00:41:16.974080 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 (0:00:00.127) 0:00:23.071 ***** 2026-02-05 00:41:16.974091 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:41:16.974102 | orchestrator | 2026-02-05 00:41:16.974113 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-05 00:41:16.974124 | orchestrator | Thursday 05 February 2026 00:41:14 +0000 (0:00:00.116) 0:00:23.188 ***** 2026-02-05 00:41:16.974135 | orchestrator | changed: [testbed-node-4] => { 2026-02-05 00:41:16.974146 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-05 00:41:16.974157 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:41:16.974168 | orchestrator |  "sdb": { 2026-02-05 00:41:16.974179 | orchestrator |  "osd_lvm_uuid": "159372f8-6c52-51f3-a9af-3fbf7ffb45fe" 2026-02-05 00:41:16.974190 | orchestrator |  }, 2026-02-05 00:41:16.974201 | orchestrator |  "sdc": { 2026-02-05 00:41:16.974212 | orchestrator |  "osd_lvm_uuid": "523b4628-8322-5ebe-8cc3-60a2eeaa41a5" 2026-02-05 00:41:16.974223 | orchestrator |  } 2026-02-05 00:41:16.974233 | orchestrator |  }, 2026-02-05 00:41:16.974244 | orchestrator |  "lvm_volumes": [ 2026-02-05 00:41:16.974255 | orchestrator |  { 2026-02-05 00:41:16.974266 | orchestrator |  "data": "osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe", 2026-02-05 00:41:16.974277 | orchestrator |  "data_vg": "ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe" 2026-02-05 00:41:16.974288 | orchestrator |  }, 2026-02-05 00:41:16.974299 | orchestrator |  { 2026-02-05 00:41:16.974309 | orchestrator |  "data": "osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5", 2026-02-05 00:41:16.974320 | orchestrator |  "data_vg": 
"ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5" 2026-02-05 00:41:16.974331 | orchestrator |  } 2026-02-05 00:41:16.974341 | orchestrator |  ] 2026-02-05 00:41:16.974352 | orchestrator |  } 2026-02-05 00:41:16.974363 | orchestrator | } 2026-02-05 00:41:16.974374 | orchestrator | 2026-02-05 00:41:16.974385 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-05 00:41:16.974396 | orchestrator | Thursday 05 February 2026 00:41:15 +0000 (0:00:00.178) 0:00:23.366 ***** 2026-02-05 00:41:16.974407 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-05 00:41:16.974417 | orchestrator | 2026-02-05 00:41:16.974428 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-05 00:41:16.974439 | orchestrator | 2026-02-05 00:41:16.974449 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 00:41:16.974460 | orchestrator | Thursday 05 February 2026 00:41:15 +0000 (0:00:00.896) 0:00:24.263 ***** 2026-02-05 00:41:16.974471 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 00:41:16.974482 | orchestrator | 2026-02-05 00:41:16.974492 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 00:41:16.974537 | orchestrator | Thursday 05 February 2026 00:41:16 +0000 (0:00:00.499) 0:00:24.762 ***** 2026-02-05 00:41:16.974550 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:41:16.974561 | orchestrator | 2026-02-05 00:41:16.974572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:16.974583 | orchestrator | Thursday 05 February 2026 00:41:16 +0000 (0:00:00.195) 0:00:24.957 ***** 2026-02-05 00:41:16.974594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-05 00:41:16.974605 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-05 00:41:16.974616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-05 00:41:16.974627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-05 00:41:16.974638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-05 00:41:16.974656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-05 00:41:23.649786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-05 00:41:23.649873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-05 00:41:23.649884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-05 00:41:23.649892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-05 00:41:23.649899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-05 00:41:23.649906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-05 00:41:23.649913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-05 00:41:23.649920 | orchestrator | 2026-02-05 00:41:23.649928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.649936 | orchestrator | Thursday 05 February 2026 00:41:16 +0000 (0:00:00.326) 0:00:25.284 ***** 2026-02-05 00:41:23.649943 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.649951 | orchestrator | 2026-02-05 00:41:23.649957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 
00:41:23.649964 | orchestrator | Thursday 05 February 2026 00:41:17 +0000 (0:00:00.162) 0:00:25.447 ***** 2026-02-05 00:41:23.649971 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.649977 | orchestrator | 2026-02-05 00:41:23.649984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.649991 | orchestrator | Thursday 05 February 2026 00:41:17 +0000 (0:00:00.173) 0:00:25.621 ***** 2026-02-05 00:41:23.649997 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650004 | orchestrator | 2026-02-05 00:41:23.650011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650061 | orchestrator | Thursday 05 February 2026 00:41:17 +0000 (0:00:00.169) 0:00:25.790 ***** 2026-02-05 00:41:23.650068 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650074 | orchestrator | 2026-02-05 00:41:23.650081 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650088 | orchestrator | Thursday 05 February 2026 00:41:17 +0000 (0:00:00.161) 0:00:25.952 ***** 2026-02-05 00:41:23.650095 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650101 | orchestrator | 2026-02-05 00:41:23.650108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650115 | orchestrator | Thursday 05 February 2026 00:41:17 +0000 (0:00:00.153) 0:00:26.105 ***** 2026-02-05 00:41:23.650121 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650128 | orchestrator | 2026-02-05 00:41:23.650136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650142 | orchestrator | Thursday 05 February 2026 00:41:17 +0000 (0:00:00.139) 0:00:26.245 ***** 2026-02-05 00:41:23.650169 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650176 | 
orchestrator | 2026-02-05 00:41:23.650183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650191 | orchestrator | Thursday 05 February 2026 00:41:18 +0000 (0:00:00.144) 0:00:26.390 ***** 2026-02-05 00:41:23.650197 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650204 | orchestrator | 2026-02-05 00:41:23.650211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650218 | orchestrator | Thursday 05 February 2026 00:41:18 +0000 (0:00:00.140) 0:00:26.530 ***** 2026-02-05 00:41:23.650225 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa) 2026-02-05 00:41:23.650233 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa) 2026-02-05 00:41:23.650239 | orchestrator | 2026-02-05 00:41:23.650246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650252 | orchestrator | Thursday 05 February 2026 00:41:18 +0000 (0:00:00.590) 0:00:27.121 ***** 2026-02-05 00:41:23.650259 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb) 2026-02-05 00:41:23.650266 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb) 2026-02-05 00:41:23.650273 | orchestrator | 2026-02-05 00:41:23.650279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650286 | orchestrator | Thursday 05 February 2026 00:41:19 +0000 (0:00:00.330) 0:00:27.452 ***** 2026-02-05 00:41:23.650293 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3) 2026-02-05 00:41:23.650299 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-SQEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3) 2026-02-05 00:41:23.650306 | orchestrator | 2026-02-05 00:41:23.650313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650319 | orchestrator | Thursday 05 February 2026 00:41:19 +0000 (0:00:00.416) 0:00:27.868 ***** 2026-02-05 00:41:23.650326 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722) 2026-02-05 00:41:23.650333 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722) 2026-02-05 00:41:23.650339 | orchestrator | 2026-02-05 00:41:23.650347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:41:23.650355 | orchestrator | Thursday 05 February 2026 00:41:19 +0000 (0:00:00.387) 0:00:28.256 ***** 2026-02-05 00:41:23.650363 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 00:41:23.650371 | orchestrator | 2026-02-05 00:41:23.650378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650399 | orchestrator | Thursday 05 February 2026 00:41:20 +0000 (0:00:00.296) 0:00:28.552 ***** 2026-02-05 00:41:23.650407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-05 00:41:23.650415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-05 00:41:23.650423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-05 00:41:23.650431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-05 00:41:23.650438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-05 00:41:23.650460 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-05 00:41:23.650468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-05 00:41:23.650476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-05 00:41:23.650488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-05 00:41:23.650495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-05 00:41:23.650501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-05 00:41:23.650527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-05 00:41:23.650535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-05 00:41:23.650542 | orchestrator | 2026-02-05 00:41:23.650548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650555 | orchestrator | Thursday 05 February 2026 00:41:20 +0000 (0:00:00.440) 0:00:28.992 ***** 2026-02-05 00:41:23.650561 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650568 | orchestrator | 2026-02-05 00:41:23.650575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650581 | orchestrator | Thursday 05 February 2026 00:41:20 +0000 (0:00:00.185) 0:00:29.177 ***** 2026-02-05 00:41:23.650588 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650594 | orchestrator | 2026-02-05 00:41:23.650601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650607 | orchestrator | Thursday 05 February 2026 00:41:21 +0000 (0:00:00.181) 0:00:29.359 ***** 
2026-02-05 00:41:23.650618 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650625 | orchestrator | 2026-02-05 00:41:23.650632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650638 | orchestrator | Thursday 05 February 2026 00:41:21 +0000 (0:00:00.146) 0:00:29.506 ***** 2026-02-05 00:41:23.650650 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650661 | orchestrator | 2026-02-05 00:41:23.650671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650682 | orchestrator | Thursday 05 February 2026 00:41:21 +0000 (0:00:00.169) 0:00:29.675 ***** 2026-02-05 00:41:23.650693 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650702 | orchestrator | 2026-02-05 00:41:23.650712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650723 | orchestrator | Thursday 05 February 2026 00:41:21 +0000 (0:00:00.167) 0:00:29.842 ***** 2026-02-05 00:41:23.650733 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650745 | orchestrator | 2026-02-05 00:41:23.650755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650767 | orchestrator | Thursday 05 February 2026 00:41:22 +0000 (0:00:00.474) 0:00:30.317 ***** 2026-02-05 00:41:23.650777 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650788 | orchestrator | 2026-02-05 00:41:23.650795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650801 | orchestrator | Thursday 05 February 2026 00:41:22 +0000 (0:00:00.151) 0:00:30.469 ***** 2026-02-05 00:41:23.650808 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650815 | orchestrator | 2026-02-05 00:41:23.650821 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2026-02-05 00:41:23.650828 | orchestrator | Thursday 05 February 2026 00:41:22 +0000 (0:00:00.182) 0:00:30.652 ***** 2026-02-05 00:41:23.650835 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-05 00:41:23.650842 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-05 00:41:23.650849 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-05 00:41:23.650855 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-05 00:41:23.650862 | orchestrator | 2026-02-05 00:41:23.650869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650875 | orchestrator | Thursday 05 February 2026 00:41:22 +0000 (0:00:00.577) 0:00:31.229 ***** 2026-02-05 00:41:23.650882 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650889 | orchestrator | 2026-02-05 00:41:23.650901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650908 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.176) 0:00:31.405 ***** 2026-02-05 00:41:23.650915 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650921 | orchestrator | 2026-02-05 00:41:23.650928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650935 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.187) 0:00:31.593 ***** 2026-02-05 00:41:23.650941 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650948 | orchestrator | 2026-02-05 00:41:23.650955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:41:23.650961 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.177) 0:00:31.771 ***** 2026-02-05 00:41:23.650968 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:23.650975 | orchestrator | 2026-02-05 00:41:23.650988 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2026-02-05 00:41:27.390300 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.184) 0:00:31.956 ***** 2026-02-05 00:41:27.390383 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-05 00:41:27.390391 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-05 00:41:27.390396 | orchestrator | 2026-02-05 00:41:27.390400 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-05 00:41:27.390405 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.154) 0:00:32.111 ***** 2026-02-05 00:41:27.390409 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390414 | orchestrator | 2026-02-05 00:41:27.390417 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-05 00:41:27.390421 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.120) 0:00:32.231 ***** 2026-02-05 00:41:27.390425 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390429 | orchestrator | 2026-02-05 00:41:27.390432 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-05 00:41:27.390436 | orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.127) 0:00:32.358 ***** 2026-02-05 00:41:27.390440 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390443 | orchestrator | 2026-02-05 00:41:27.390447 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-05 00:41:27.390451 | orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.238) 0:00:32.597 ***** 2026-02-05 00:41:27.390455 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:41:27.390459 | orchestrator | 2026-02-05 00:41:27.390463 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-05 
00:41:27.390467 | orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.121) 0:00:32.718 ***** 2026-02-05 00:41:27.390472 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3edfc207-63bb-5e8f-b635-306c655bc02c'}}) 2026-02-05 00:41:27.390476 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '121c279b-9e45-54e8-9359-e1d452607edd'}}) 2026-02-05 00:41:27.390480 | orchestrator | 2026-02-05 00:41:27.390483 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-05 00:41:27.390487 | orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.139) 0:00:32.857 ***** 2026-02-05 00:41:27.390492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3edfc207-63bb-5e8f-b635-306c655bc02c'}})  2026-02-05 00:41:27.390498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '121c279b-9e45-54e8-9359-e1d452607edd'}})  2026-02-05 00:41:27.390502 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390505 | orchestrator | 2026-02-05 00:41:27.390561 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-05 00:41:27.390565 | orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.143) 0:00:33.000 ***** 2026-02-05 00:41:27.390569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3edfc207-63bb-5e8f-b635-306c655bc02c'}})  2026-02-05 00:41:27.390589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '121c279b-9e45-54e8-9359-e1d452607edd'}})  2026-02-05 00:41:27.390594 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390598 | orchestrator | 2026-02-05 00:41:27.390601 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-05 00:41:27.390605 | 
orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.121) 0:00:33.122 ***** 2026-02-05 00:41:27.390620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3edfc207-63bb-5e8f-b635-306c655bc02c'}})  2026-02-05 00:41:27.390624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '121c279b-9e45-54e8-9359-e1d452607edd'}})  2026-02-05 00:41:27.390628 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390632 | orchestrator | 2026-02-05 00:41:27.390636 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-05 00:41:27.390639 | orchestrator | Thursday 05 February 2026 00:41:24 +0000 (0:00:00.143) 0:00:33.266 ***** 2026-02-05 00:41:27.390643 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:41:27.390647 | orchestrator | 2026-02-05 00:41:27.390651 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-05 00:41:27.390655 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:00.129) 0:00:33.396 ***** 2026-02-05 00:41:27.390658 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:41:27.390662 | orchestrator | 2026-02-05 00:41:27.390666 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-05 00:41:27.390670 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:00.118) 0:00:33.514 ***** 2026-02-05 00:41:27.390674 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390677 | orchestrator | 2026-02-05 00:41:27.390681 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-05 00:41:27.390685 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:00.104) 0:00:33.619 ***** 2026-02-05 00:41:27.390689 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390692 | orchestrator | 2026-02-05 00:41:27.390696 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2026-02-05 00:41:27.390700 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:00.102) 0:00:33.721 ***** 2026-02-05 00:41:27.390704 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390707 | orchestrator | 2026-02-05 00:41:27.390711 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-05 00:41:27.390715 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:00.127) 0:00:33.848 ***** 2026-02-05 00:41:27.390719 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:41:27.390722 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:41:27.390727 | orchestrator |  "sdb": { 2026-02-05 00:41:27.390744 | orchestrator |  "osd_lvm_uuid": "3edfc207-63bb-5e8f-b635-306c655bc02c" 2026-02-05 00:41:27.390748 | orchestrator |  }, 2026-02-05 00:41:27.390752 | orchestrator |  "sdc": { 2026-02-05 00:41:27.390756 | orchestrator |  "osd_lvm_uuid": "121c279b-9e45-54e8-9359-e1d452607edd" 2026-02-05 00:41:27.390760 | orchestrator |  } 2026-02-05 00:41:27.390764 | orchestrator |  } 2026-02-05 00:41:27.390768 | orchestrator | } 2026-02-05 00:41:27.390772 | orchestrator | 2026-02-05 00:41:27.390776 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-05 00:41:27.390780 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:00.142) 0:00:33.991 ***** 2026-02-05 00:41:27.390783 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390787 | orchestrator | 2026-02-05 00:41:27.390791 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-05 00:41:27.390795 | orchestrator | Thursday 05 February 2026 00:41:26 +0000 (0:00:00.330) 0:00:34.321 ***** 2026-02-05 00:41:27.390799 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390813 | orchestrator | 2026-02-05 00:41:27.390817 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2026-02-05 00:41:27.390821 | orchestrator | Thursday 05 February 2026 00:41:26 +0000 (0:00:00.153) 0:00:34.475 ***** 2026-02-05 00:41:27.390825 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:41:27.390828 | orchestrator | 2026-02-05 00:41:27.390832 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-05 00:41:27.390836 | orchestrator | Thursday 05 February 2026 00:41:26 +0000 (0:00:00.110) 0:00:34.586 ***** 2026-02-05 00:41:27.390840 | orchestrator | changed: [testbed-node-5] => { 2026-02-05 00:41:27.390843 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-05 00:41:27.390847 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:41:27.390851 | orchestrator |  "sdb": { 2026-02-05 00:41:27.390855 | orchestrator |  "osd_lvm_uuid": "3edfc207-63bb-5e8f-b635-306c655bc02c" 2026-02-05 00:41:27.390859 | orchestrator |  }, 2026-02-05 00:41:27.390863 | orchestrator |  "sdc": { 2026-02-05 00:41:27.390868 | orchestrator |  "osd_lvm_uuid": "121c279b-9e45-54e8-9359-e1d452607edd" 2026-02-05 00:41:27.390872 | orchestrator |  } 2026-02-05 00:41:27.390877 | orchestrator |  }, 2026-02-05 00:41:27.390881 | orchestrator |  "lvm_volumes": [ 2026-02-05 00:41:27.390885 | orchestrator |  { 2026-02-05 00:41:27.390890 | orchestrator |  "data": "osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c", 2026-02-05 00:41:27.390894 | orchestrator |  "data_vg": "ceph-3edfc207-63bb-5e8f-b635-306c655bc02c" 2026-02-05 00:41:27.390899 | orchestrator |  }, 2026-02-05 00:41:27.390903 | orchestrator |  { 2026-02-05 00:41:27.390907 | orchestrator |  "data": "osd-block-121c279b-9e45-54e8-9359-e1d452607edd", 2026-02-05 00:41:27.390912 | orchestrator |  "data_vg": "ceph-121c279b-9e45-54e8-9359-e1d452607edd" 2026-02-05 00:41:27.390916 | orchestrator |  } 2026-02-05 00:41:27.390921 | orchestrator |  ] 2026-02-05 00:41:27.390928 | orchestrator |  } 2026-02-05 00:41:27.390933 | 
orchestrator | } 2026-02-05 00:41:27.390937 | orchestrator | 2026-02-05 00:41:27.390942 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-05 00:41:27.390946 | orchestrator | Thursday 05 February 2026 00:41:26 +0000 (0:00:00.192) 0:00:34.778 ***** 2026-02-05 00:41:27.390950 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 00:41:27.390955 | orchestrator | 2026-02-05 00:41:27.390959 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:41:27.390963 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 00:41:27.390969 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 00:41:27.390974 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 00:41:27.390978 | orchestrator | 2026-02-05 00:41:27.390983 | orchestrator | 2026-02-05 00:41:27.390986 | orchestrator | 2026-02-05 00:41:27.390990 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:41:27.390994 | orchestrator | Thursday 05 February 2026 00:41:27 +0000 (0:00:00.908) 0:00:35.686 ***** 2026-02-05 00:41:27.390997 | orchestrator | =============================================================================== 2026-02-05 00:41:27.391001 | orchestrator | Write configuration file ------------------------------------------------ 3.41s 2026-02-05 00:41:27.391005 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s 2026-02-05 00:41:27.391009 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2026-02-05 00:41:27.391012 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s 2026-02-05 00:41:27.391019 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-02-05 00:41:27.391023 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-02-05 00:41:27.391027 | orchestrator | Print configuration data ------------------------------------------------ 0.67s 2026-02-05 00:41:27.391031 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-02-05 00:41:27.391034 | orchestrator | Get initial list of available block devices ----------------------------- 0.60s 2026-02-05 00:41:27.391038 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2026-02-05 00:41:27.391042 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-02-05 00:41:27.391045 | orchestrator | Print WAL devices ------------------------------------------------------- 0.55s 2026-02-05 00:41:27.391049 | orchestrator | Set DB devices config data ---------------------------------------------- 0.53s 2026-02-05 00:41:27.391055 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s 2026-02-05 00:41:27.590986 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.51s 2026-02-05 00:41:27.591071 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s 2026-02-05 00:41:27.591081 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-02-05 00:41:27.591088 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-02-05 00:41:27.591095 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.48s 2026-02-05 00:41:27.591102 | orchestrator | Add known partitions to the list of available block devices ------------- 0.47s 2026-02-05 00:41:49.759094 | orchestrator | 
2026-02-05 00:41:49 | INFO  | Task 2a254051-902a-4a20-948c-06662305b320 (sync inventory) is running in background. Output coming soon. 2026-02-05 00:42:15.383790 | orchestrator | 2026-02-05 00:41:51 | INFO  | Starting group_vars file reorganization 2026-02-05 00:42:15.383907 | orchestrator | 2026-02-05 00:41:51 | INFO  | Moved 0 file(s) to their respective directories 2026-02-05 00:42:15.383924 | orchestrator | 2026-02-05 00:41:51 | INFO  | Group_vars file reorganization completed 2026-02-05 00:42:15.383936 | orchestrator | 2026-02-05 00:41:54 | INFO  | Starting variable preparation from inventory 2026-02-05 00:42:15.383947 | orchestrator | 2026-02-05 00:41:56 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-05 00:42:15.383959 | orchestrator | 2026-02-05 00:41:56 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-05 00:42:15.383991 | orchestrator | 2026-02-05 00:41:57 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-05 00:42:15.384003 | orchestrator | 2026-02-05 00:41:57 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-05 00:42:15.384015 | orchestrator | 2026-02-05 00:41:57 | INFO  | Variable preparation completed 2026-02-05 00:42:15.384026 | orchestrator | 2026-02-05 00:41:58 | INFO  | Starting inventory overwrite handling 2026-02-05 00:42:15.384037 | orchestrator | 2026-02-05 00:41:58 | INFO  | Handling group overwrites in 99-overwrite 2026-02-05 00:42:15.384053 | orchestrator | 2026-02-05 00:41:58 | INFO  | Removing group frr:children from 60-generic 2026-02-05 00:42:15.384065 | orchestrator | 2026-02-05 00:41:58 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-05 00:42:15.384076 | orchestrator | 2026-02-05 00:41:58 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-05 00:42:15.384087 | orchestrator | 2026-02-05 00:41:58 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-05 00:42:15.384098 | orchestrator | 2026-02-05 00:41:58 | INFO  | 
Handling group overwrites in 20-roles 2026-02-05 00:42:15.384109 | orchestrator | 2026-02-05 00:41:58 | INFO  | Removing group k3s_node from 50-infrastructure 2026-02-05 00:42:15.384145 | orchestrator | 2026-02-05 00:41:58 | INFO  | Removed 5 group(s) in total 2026-02-05 00:42:15.384156 | orchestrator | 2026-02-05 00:41:58 | INFO  | Inventory overwrite handling completed 2026-02-05 00:42:15.384167 | orchestrator | 2026-02-05 00:41:59 | INFO  | Starting merge of inventory files 2026-02-05 00:42:15.384178 | orchestrator | 2026-02-05 00:41:59 | INFO  | Inventory files merged successfully 2026-02-05 00:42:15.384189 | orchestrator | 2026-02-05 00:42:03 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-05 00:42:15.384199 | orchestrator | 2026-02-05 00:42:14 | INFO  | Successfully wrote ClusterShell configuration 2026-02-05 00:42:15.384211 | orchestrator | [master 4d6571c] 2026-02-05-00-42 2026-02-05 00:42:15.384222 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-05 00:42:17.427020 | orchestrator | 2026-02-05 00:42:17 | INFO  | Task 0b0eb038-0755-4e24-9ac1-8831f93b8a4e (ceph-create-lvm-devices) was prepared for execution. 2026-02-05 00:42:17.427114 | orchestrator | 2026-02-05 00:42:17 | INFO  | It takes a moment until task 0b0eb038-0755-4e24-9ac1-8831f93b8a4e (ceph-create-lvm-devices) has been started and output is visible here. 
2026-02-05 00:42:27.194341 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 00:42:27.194417 | orchestrator | 2.16.14
2026-02-05 00:42:27.194424 | orchestrator |
2026-02-05 00:42:27.194429 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-05 00:42:27.194434 | orchestrator |
2026-02-05 00:42:27.194438 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 00:42:27.194443 | orchestrator | Thursday 05 February 2026 00:42:20 +0000 (0:00:00.226) 0:00:00.226 *****
2026-02-05 00:42:27.194448 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 00:42:27.194453 | orchestrator |
2026-02-05 00:42:27.194459 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-05 00:42:27.194466 | orchestrator | Thursday 05 February 2026 00:42:20 +0000 (0:00:00.227) 0:00:00.453 *****
2026-02-05 00:42:27.194474 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:27.194511 | orchestrator |
2026-02-05 00:42:27.194517 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194524 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.238) 0:00:00.691 *****
2026-02-05 00:42:27.194531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:42:27.194537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:42:27.194543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:42:27.194549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:42:27.194555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:42:27.194561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:42:27.194568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:42:27.194574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:42:27.194580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-05 00:42:27.194587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:42:27.194594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:42:27.194600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:42:27.194607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:42:27.194634 | orchestrator |
2026-02-05 00:42:27.194641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194646 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.489) 0:00:01.180 *****
2026-02-05 00:42:27.194652 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194658 | orchestrator |
2026-02-05 00:42:27.194665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194671 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.169) 0:00:01.349 *****
2026-02-05 00:42:27.194677 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194684 | orchestrator |
2026-02-05 00:42:27.194690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194697 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.170) 0:00:01.520 *****
2026-02-05 00:42:27.194703 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194709 | orchestrator |
2026-02-05 00:42:27.194715 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194722 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.179) 0:00:01.700 *****
2026-02-05 00:42:27.194728 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194734 | orchestrator |
2026-02-05 00:42:27.194739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194745 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.203) 0:00:01.904 *****
2026-02-05 00:42:27.194751 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194758 | orchestrator |
2026-02-05 00:42:27.194764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194770 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.177) 0:00:02.081 *****
2026-02-05 00:42:27.194776 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194782 | orchestrator |
2026-02-05 00:42:27.194788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194794 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.198) 0:00:02.279 *****
2026-02-05 00:42:27.194800 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194806 | orchestrator |
2026-02-05 00:42:27.194812 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194818 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.165) 0:00:02.445 *****
2026-02-05 00:42:27.194824 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.194830 | orchestrator |
2026-02-05 00:42:27.194836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194842 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.184) 0:00:02.630 *****
2026-02-05 00:42:27.194849 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f)
2026-02-05 00:42:27.194856 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f)
2026-02-05 00:42:27.194862 | orchestrator |
2026-02-05 00:42:27.194868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194890 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.360) 0:00:02.990 *****
2026-02-05 00:42:27.194897 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0)
2026-02-05 00:42:27.194903 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0)
2026-02-05 00:42:27.194909 | orchestrator |
2026-02-05 00:42:27.194915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194921 | orchestrator | Thursday 05 February 2026 00:42:24 +0000 (0:00:00.539) 0:00:03.530 *****
2026-02-05 00:42:27.194928 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9)
2026-02-05 00:42:27.194934 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9)
2026-02-05 00:42:27.194946 | orchestrator |
2026-02-05 00:42:27.194953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194959 | orchestrator | Thursday 05 February 2026 00:42:24 +0000 (0:00:00.539) 0:00:04.069 *****
2026-02-05 00:42:27.194966 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d)
2026-02-05 00:42:27.194972 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d)
2026-02-05 00:42:27.194978 | orchestrator |
2026-02-05 00:42:27.194985 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:27.194991 | orchestrator | Thursday 05 February 2026 00:42:25 +0000 (0:00:00.675) 0:00:04.744 *****
2026-02-05 00:42:27.194998 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 00:42:27.195004 | orchestrator |
2026-02-05 00:42:27.195009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195015 | orchestrator | Thursday 05 February 2026 00:42:25 +0000 (0:00:00.286) 0:00:05.031 *****
2026-02-05 00:42:27.195022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:42:27.195029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:42:27.195035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:42:27.195057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:42:27.195063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:42:27.195070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:42:27.195076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:42:27.195082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:42:27.195088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-05 00:42:27.195095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:42:27.195101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:42:27.195110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:42:27.195117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:42:27.195123 | orchestrator |
2026-02-05 00:42:27.195130 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195136 | orchestrator | Thursday 05 February 2026 00:42:25 +0000 (0:00:00.392) 0:00:05.424 *****
2026-02-05 00:42:27.195142 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195149 | orchestrator |
2026-02-05 00:42:27.195155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195161 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.203) 0:00:05.627 *****
2026-02-05 00:42:27.195168 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195173 | orchestrator |
2026-02-05 00:42:27.195177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195182 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.183) 0:00:05.811 *****
2026-02-05 00:42:27.195187 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195191 | orchestrator |
2026-02-05 00:42:27.195195 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195200 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.177) 0:00:05.988 *****
2026-02-05 00:42:27.195204 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195213 | orchestrator |
2026-02-05 00:42:27.195217 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195222 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.159) 0:00:06.147 *****
2026-02-05 00:42:27.195226 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195230 | orchestrator |
2026-02-05 00:42:27.195235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195239 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.161) 0:00:06.308 *****
2026-02-05 00:42:27.195244 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195248 | orchestrator |
2026-02-05 00:42:27.195252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:27.195257 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.152) 0:00:06.460 *****
2026-02-05 00:42:27.195261 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:27.195266 | orchestrator |
2026-02-05 00:42:27.195273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:34.438672 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.187) 0:00:06.648 *****
2026-02-05 00:42:34.438781 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.438800 | orchestrator |
2026-02-05 00:42:34.438813 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:34.438825 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.180) 0:00:06.828 *****
2026-02-05 00:42:34.438837 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-05 00:42:34.438849 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-05 00:42:34.438860 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-05 00:42:34.438871 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-05 00:42:34.438882 | orchestrator |
2026-02-05 00:42:34.438893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:34.438904 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.771) 0:00:07.599 *****
2026-02-05 00:42:34.438915 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.438926 | orchestrator |
2026-02-05 00:42:34.438937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:34.438948 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.181) 0:00:07.780 *****
2026-02-05 00:42:34.438959 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.438970 | orchestrator |
2026-02-05 00:42:34.438981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:34.438992 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.168) 0:00:07.949 *****
2026-02-05 00:42:34.439004 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439015 | orchestrator |
2026-02-05 00:42:34.439026 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:34.439037 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.189) 0:00:08.139 *****
2026-02-05 00:42:34.439048 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439058 | orchestrator |
2026-02-05 00:42:34.439069 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-05 00:42:34.439080 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.168) 0:00:08.308 *****
2026-02-05 00:42:34.439091 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439102 | orchestrator |
2026-02-05 00:42:34.439113 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-05 00:42:34.439124 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.141) 0:00:08.449 *****
2026-02-05 00:42:34.439136 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3e842383-5890-511f-b982-bff6d8042060'}})
2026-02-05 00:42:34.439147 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '22ded513-57d8-573e-a796-c8381d672537'}})
2026-02-05 00:42:34.439158 | orchestrator |
2026-02-05 00:42:34.439169 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-05 00:42:34.439205 | orchestrator | Thursday 05 February 2026 00:42:29 +0000 (0:00:00.154) 0:00:08.604 *****
2026-02-05 00:42:34.439220 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439235 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439248 | orchestrator |
2026-02-05 00:42:34.439262 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-05 00:42:34.439274 | orchestrator | Thursday 05 February 2026 00:42:31 +0000 (0:00:01.941) 0:00:10.545 *****
2026-02-05 00:42:34.439288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439314 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439327 | orchestrator |
2026-02-05 00:42:34.439339 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-05 00:42:34.439350 | orchestrator | Thursday 05 February 2026 00:42:31 +0000 (0:00:00.117) 0:00:10.663 *****
2026-02-05 00:42:34.439361 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439371 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439382 | orchestrator |
2026-02-05 00:42:34.439393 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-05 00:42:34.439404 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:01.438) 0:00:12.102 *****
2026-02-05 00:42:34.439415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439437 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439447 | orchestrator |
2026-02-05 00:42:34.439458 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-05 00:42:34.439469 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:00.134) 0:00:12.236 *****
2026-02-05 00:42:34.439526 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439539 | orchestrator |
2026-02-05 00:42:34.439550 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-05 00:42:34.439561 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:00.113) 0:00:12.350 *****
2026-02-05 00:42:34.439572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439594 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439604 | orchestrator |
2026-02-05 00:42:34.439615 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-05 00:42:34.439626 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.315) 0:00:12.665 *****
2026-02-05 00:42:34.439636 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439647 | orchestrator |
2026-02-05 00:42:34.439658 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-05 00:42:34.439669 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.139) 0:00:12.805 *****
2026-02-05 00:42:34.439688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439710 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439720 | orchestrator |
2026-02-05 00:42:34.439731 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-05 00:42:34.439742 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.160) 0:00:12.965 *****
2026-02-05 00:42:34.439753 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439763 | orchestrator |
2026-02-05 00:42:34.439774 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-05 00:42:34.439785 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.141) 0:00:13.106 *****
2026-02-05 00:42:34.439796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439817 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439828 | orchestrator |
2026-02-05 00:42:34.439839 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-05 00:42:34.439850 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.138) 0:00:13.245 *****
2026-02-05 00:42:34.439861 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:34.439871 | orchestrator |
2026-02-05 00:42:34.439882 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-05 00:42:34.439911 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.115) 0:00:13.361 *****
2026-02-05 00:42:34.439927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.439939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.439950 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.439960 | orchestrator |
2026-02-05 00:42:34.439971 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-05 00:42:34.439982 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.147) 0:00:13.508 *****
2026-02-05 00:42:34.439993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.440004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.440015 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.440025 | orchestrator |
2026-02-05 00:42:34.440036 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-05 00:42:34.440047 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.141) 0:00:13.650 *****
2026-02-05 00:42:34.440058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:34.440069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:34.440080 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.440090 | orchestrator |
2026-02-05 00:42:34.440101 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-05 00:42:34.440112 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.138) 0:00:13.789 *****
2026-02-05 00:42:34.440131 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:34.440142 | orchestrator |
2026-02-05 00:42:34.440153 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-05 00:42:34.440170 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.103) 0:00:13.892 *****
2026-02-05 00:42:40.239011 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.239138 | orchestrator |
2026-02-05 00:42:40.239166 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-05 00:42:40.239187 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.113) 0:00:14.005 *****
2026-02-05 00:42:40.239209 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.239228 | orchestrator |
2026-02-05 00:42:40.239247 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-05 00:42:40.239266 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.129) 0:00:14.135 *****
2026-02-05 00:42:40.239285 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:42:40.239304 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-05 00:42:40.239325 | orchestrator | }
2026-02-05 00:42:40.239344 | orchestrator |
2026-02-05 00:42:40.239364 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-05 00:42:40.239381 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.260) 0:00:14.395 *****
2026-02-05 00:42:40.239394 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:42:40.239413 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-05 00:42:40.239430 | orchestrator | }
2026-02-05 00:42:40.239448 | orchestrator |
2026-02-05 00:42:40.239467 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-05 00:42:40.239596 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.118) 0:00:14.513 *****
2026-02-05 00:42:40.239618 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:42:40.239634 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-05 00:42:40.239648 | orchestrator | }
2026-02-05 00:42:40.239660 | orchestrator |
2026-02-05 00:42:40.239673 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-05 00:42:40.239692 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.154) 0:00:14.667 *****
2026-02-05 00:42:40.239748 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:40.239770 | orchestrator |
2026-02-05 00:42:40.239790 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-05 00:42:40.239810 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.644) 0:00:15.312 *****
2026-02-05 00:42:40.239831 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:40.239851 | orchestrator |
2026-02-05 00:42:40.239872 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-05 00:42:40.239894 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.476) 0:00:15.789 *****
2026-02-05 00:42:40.239915 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:40.239949 | orchestrator |
2026-02-05 00:42:40.239969 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-05 00:42:40.239988 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.496) 0:00:16.285 *****
2026-02-05 00:42:40.240008 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:40.240026 | orchestrator |
2026-02-05 00:42:40.240042 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-05 00:42:40.240053 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.161) 0:00:16.447 *****
2026-02-05 00:42:40.240064 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240075 | orchestrator |
2026-02-05 00:42:40.240086 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-05 00:42:40.240097 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.110) 0:00:16.557 *****
2026-02-05 00:42:40.240108 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240119 | orchestrator |
2026-02-05 00:42:40.240130 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-05 00:42:40.240168 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.094) 0:00:16.652 *****
2026-02-05 00:42:40.240194 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:42:40.240206 | orchestrator |     "vgs_report": {
2026-02-05 00:42:40.240218 | orchestrator |         "vg": []
2026-02-05 00:42:40.240229 | orchestrator |     }
2026-02-05 00:42:40.240240 | orchestrator | }
2026-02-05 00:42:40.240251 | orchestrator |
2026-02-05 00:42:40.240262 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-05 00:42:40.240272 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.134) 0:00:16.786 *****
2026-02-05 00:42:40.240283 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240294 | orchestrator |
2026-02-05 00:42:40.240304 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-05 00:42:40.240315 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.131) 0:00:16.918 *****
2026-02-05 00:42:40.240331 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240349 | orchestrator |
2026-02-05 00:42:40.240369 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-05 00:42:40.240388 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.128) 0:00:17.046 *****
2026-02-05 00:42:40.240407 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240420 | orchestrator |
2026-02-05 00:42:40.240431 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-05 00:42:40.240441 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.274) 0:00:17.320 *****
2026-02-05 00:42:40.240452 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240462 | orchestrator |
2026-02-05 00:42:40.240473 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-05 00:42:40.240516 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.106) 0:00:17.427 *****
2026-02-05 00:42:40.240527 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240538 | orchestrator |
2026-02-05 00:42:40.240548 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-05 00:42:40.240559 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.109) 0:00:17.537 *****
2026-02-05 00:42:40.240570 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240580 | orchestrator |
2026-02-05 00:42:40.240591 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-05 00:42:40.240602 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.129) 0:00:17.667 *****
2026-02-05 00:42:40.240612 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240623 | orchestrator |
2026-02-05 00:42:40.240633 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-05 00:42:40.240644 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.112) 0:00:17.780 *****
2026-02-05 00:42:40.240677 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240688 | orchestrator |
2026-02-05 00:42:40.240699 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-05 00:42:40.240710 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.123) 0:00:17.903 *****
2026-02-05 00:42:40.240720 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240731 | orchestrator |
2026-02-05 00:42:40.240741 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-05 00:42:40.240752 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.115) 0:00:18.018 *****
2026-02-05 00:42:40.240762 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240773 | orchestrator |
2026-02-05 00:42:40.240784 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-05 00:42:40.240794 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.189) 0:00:18.207 *****
2026-02-05 00:42:40.240805 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240815 | orchestrator |
2026-02-05 00:42:40.240826 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-05 00:42:40.240837 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.143) 0:00:18.351 *****
2026-02-05 00:42:40.240858 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240868 | orchestrator |
2026-02-05 00:42:40.240879 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-05 00:42:40.240890 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.124) 0:00:18.475 *****
2026-02-05 00:42:40.240900 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240911 | orchestrator |
2026-02-05 00:42:40.240922 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-05 00:42:40.240932 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.114) 0:00:18.589 *****
2026-02-05 00:42:40.240942 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.240953 | orchestrator |
2026-02-05 00:42:40.240963 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-05 00:42:40.240974 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.114) 0:00:18.704 *****
2026-02-05 00:42:40.240986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:40.240999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:40.241009 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.241020 | orchestrator |
2026-02-05 00:42:40.241031 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-05 00:42:40.241041 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.255) 0:00:18.960 *****
2026-02-05 00:42:40.241052 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:40.241063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:40.241073 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.241084 | orchestrator |
2026-02-05 00:42:40.241095 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-05 00:42:40.241106 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.128) 0:00:19.088 *****
2026-02-05 00:42:40.241116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:40.241127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:40.241138 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:40.241149 | orchestrator |
2026-02-05 00:42:40.241159 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-05 00:42:40.241170 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.142) 0:00:19.231 *****
2026-02-05 00:42:40.241181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})  2026-02-05 00:42:40.241191 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})  2026-02-05 00:42:40.241202 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:40.241213 | orchestrator | 2026-02-05 00:42:40.241223 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 00:42:40.241234 | orchestrator | Thursday 05 February 2026 00:42:39 +0000 (0:00:00.159) 0:00:19.391 ***** 2026-02-05 00:42:40.241244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})  2026-02-05 00:42:40.241255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})  2026-02-05 00:42:40.241272 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:40.241283 | orchestrator | 2026-02-05 00:42:40.241294 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 00:42:40.241314 | orchestrator | Thursday 05 February 2026 00:42:40 +0000 (0:00:00.154) 0:00:19.545 ***** 2026-02-05 00:42:40.241332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})  2026-02-05 00:42:45.734089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})  2026-02-05 00:42:45.734173 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:45.734181 | orchestrator | 2026-02-05 00:42:45.734188 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-05 00:42:45.734196 | orchestrator | Thursday 05 February 2026 00:42:40 +0000 (0:00:00.147) 0:00:19.693 ***** 2026-02-05 00:42:45.734202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})  2026-02-05 00:42:45.734208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})  2026-02-05 00:42:45.734214 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:45.734219 | orchestrator | 2026-02-05 00:42:45.734224 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 00:42:45.734230 | orchestrator | Thursday 05 February 2026 00:42:40 +0000 (0:00:00.181) 0:00:19.874 ***** 2026-02-05 00:42:45.734235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})  2026-02-05 00:42:45.734241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})  2026-02-05 00:42:45.734247 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:45.734252 | orchestrator | 2026-02-05 00:42:45.734258 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 00:42:45.734263 | orchestrator | Thursday 05 February 2026 00:42:40 +0000 (0:00:00.178) 0:00:20.053 ***** 2026-02-05 00:42:45.734269 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:42:45.734275 | orchestrator | 2026-02-05 00:42:45.734280 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 00:42:45.734286 | orchestrator | Thursday 05 February 2026 00:42:41 +0000 
(0:00:00.516) 0:00:20.569 ***** 2026-02-05 00:42:45.734291 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:42:45.734296 | orchestrator | 2026-02-05 00:42:45.734302 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 00:42:45.734307 | orchestrator | Thursday 05 February 2026 00:42:41 +0000 (0:00:00.538) 0:00:21.108 ***** 2026-02-05 00:42:45.734313 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:42:45.734318 | orchestrator | 2026-02-05 00:42:45.734323 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 00:42:45.734329 | orchestrator | Thursday 05 February 2026 00:42:41 +0000 (0:00:00.154) 0:00:21.262 ***** 2026-02-05 00:42:45.734334 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'vg_name': 'ceph-22ded513-57d8-573e-a796-c8381d672537'}) 2026-02-05 00:42:45.734352 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'vg_name': 'ceph-3e842383-5890-511f-b982-bff6d8042060'}) 2026-02-05 00:42:45.734358 | orchestrator | 2026-02-05 00:42:45.734363 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 00:42:45.734369 | orchestrator | Thursday 05 February 2026 00:42:41 +0000 (0:00:00.180) 0:00:21.443 ***** 2026-02-05 00:42:45.734374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})  2026-02-05 00:42:45.734396 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})  2026-02-05 00:42:45.734402 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:45.734411 | orchestrator | 2026-02-05 00:42:45.734421 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] *************************
2026-02-05 00:42:45.734427 | orchestrator | Thursday 05 February 2026 00:42:42 +0000 (0:00:00.353) 0:00:21.796 *****
2026-02-05 00:42:45.734432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:45.734438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:45.734443 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:45.734448 | orchestrator |
2026-02-05 00:42:45.734454 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-05 00:42:45.734459 | orchestrator | Thursday 05 February 2026 00:42:42 +0000 (0:00:00.160) 0:00:21.956 *****
2026-02-05 00:42:45.734465 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'})
2026-02-05 00:42:45.734470 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'})
2026-02-05 00:42:45.734519 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:45.734529 | orchestrator |
2026-02-05 00:42:45.734547 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-05 00:42:45.734556 | orchestrator | Thursday 05 February 2026 00:42:42 +0000 (0:00:00.143) 0:00:22.100 *****
2026-02-05 00:42:45.734581 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:42:45.734600 | orchestrator |     "lvm_report": {
2026-02-05 00:42:45.734610 | orchestrator |         "lv": [
2026-02-05 00:42:45.734619 | orchestrator |             {
2026-02-05 00:42:45.734628 | orchestrator |                 "lv_name": "osd-block-22ded513-57d8-573e-a796-c8381d672537",
2026-02-05 00:42:45.734638 | orchestrator |                 "vg_name": "ceph-22ded513-57d8-573e-a796-c8381d672537"
2026-02-05 00:42:45.734646 | orchestrator |             },
2026-02-05 00:42:45.734655 | orchestrator |             {
2026-02-05 00:42:45.734664 | orchestrator |                 "lv_name": "osd-block-3e842383-5890-511f-b982-bff6d8042060",
2026-02-05 00:42:45.734672 | orchestrator |                 "vg_name": "ceph-3e842383-5890-511f-b982-bff6d8042060"
2026-02-05 00:42:45.734681 | orchestrator |             }
2026-02-05 00:42:45.734689 | orchestrator |         ],
2026-02-05 00:42:45.734696 | orchestrator |         "pv": [
2026-02-05 00:42:45.734704 | orchestrator |             {
2026-02-05 00:42:45.734714 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-05 00:42:45.734724 | orchestrator |                 "vg_name": "ceph-3e842383-5890-511f-b982-bff6d8042060"
2026-02-05 00:42:45.734733 | orchestrator |             },
2026-02-05 00:42:45.734742 | orchestrator |             {
2026-02-05 00:42:45.734752 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-05 00:42:45.734762 | orchestrator |                 "vg_name": "ceph-22ded513-57d8-573e-a796-c8381d672537"
2026-02-05 00:42:45.734771 | orchestrator |             }
2026-02-05 00:42:45.734779 | orchestrator |         ]
2026-02-05 00:42:45.734788 | orchestrator |     }
2026-02-05 00:42:45.734797 | orchestrator | }
2026-02-05 00:42:45.734806 | orchestrator |
2026-02-05 00:42:45.734814 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-05 00:42:45.734823 | orchestrator |
2026-02-05 00:42:45.734832 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 00:42:45.734841 | orchestrator | Thursday 05 February 2026 00:42:42 +0000 (0:00:00.320) 0:00:22.420 *****
2026-02-05 00:42:45.734863 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-05 00:42:45.734872 | orchestrator |
2026-02-05 00:42:45.734882 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05
00:42:45.734891 | orchestrator | Thursday 05 February 2026 00:42:43 +0000 (0:00:00.246) 0:00:22.667 ***** 2026-02-05 00:42:45.734900 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:42:45.734908 | orchestrator | 2026-02-05 00:42:45.734917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:45.734926 | orchestrator | Thursday 05 February 2026 00:42:43 +0000 (0:00:00.225) 0:00:22.892 ***** 2026-02-05 00:42:45.734935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-05 00:42:45.734944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:42:45.734954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:42:45.734960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:42:45.734966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:42:45.734971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:42:45.734976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:42:45.734988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:42:45.734994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-05 00:42:45.734999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:42:45.735005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:42:45.735010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:42:45.735015 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:42:45.735021 | orchestrator | 2026-02-05 00:42:45.735026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:45.735031 | orchestrator | Thursday 05 February 2026 00:42:43 +0000 (0:00:00.509) 0:00:23.402 ***** 2026-02-05 00:42:45.735037 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:45.735042 | orchestrator | 2026-02-05 00:42:45.735047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:45.735053 | orchestrator | Thursday 05 February 2026 00:42:44 +0000 (0:00:00.189) 0:00:23.591 ***** 2026-02-05 00:42:45.735058 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:45.735063 | orchestrator | 2026-02-05 00:42:45.735069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:45.735074 | orchestrator | Thursday 05 February 2026 00:42:44 +0000 (0:00:00.254) 0:00:23.846 ***** 2026-02-05 00:42:45.735079 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:45.735085 | orchestrator | 2026-02-05 00:42:45.735090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:45.735095 | orchestrator | Thursday 05 February 2026 00:42:45 +0000 (0:00:00.720) 0:00:24.567 ***** 2026-02-05 00:42:45.735101 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:45.735106 | orchestrator | 2026-02-05 00:42:45.735111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:45.735117 | orchestrator | Thursday 05 February 2026 00:42:45 +0000 (0:00:00.212) 0:00:24.779 ***** 2026-02-05 00:42:45.735125 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:45.735134 | orchestrator | 2026-02-05 00:42:45.735140 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-05 00:42:45.735145 | orchestrator | Thursday 05 February 2026 00:42:45 +0000 (0:00:00.209) 0:00:24.988 ***** 2026-02-05 00:42:45.735156 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:45.735162 | orchestrator | 2026-02-05 00:42:45.735177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990230 | orchestrator | Thursday 05 February 2026 00:42:45 +0000 (0:00:00.196) 0:00:25.185 ***** 2026-02-05 00:42:56.990350 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.990376 | orchestrator | 2026-02-05 00:42:56.990393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990407 | orchestrator | Thursday 05 February 2026 00:42:45 +0000 (0:00:00.221) 0:00:25.407 ***** 2026-02-05 00:42:56.990420 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.990433 | orchestrator | 2026-02-05 00:42:56.990446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990458 | orchestrator | Thursday 05 February 2026 00:42:46 +0000 (0:00:00.224) 0:00:25.631 ***** 2026-02-05 00:42:56.990508 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3) 2026-02-05 00:42:56.990604 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3) 2026-02-05 00:42:56.990617 | orchestrator | 2026-02-05 00:42:56.990626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990634 | orchestrator | Thursday 05 February 2026 00:42:46 +0000 (0:00:00.465) 0:00:26.097 ***** 2026-02-05 00:42:56.990642 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d) 2026-02-05 00:42:56.990651 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d) 2026-02-05 00:42:56.990659 | orchestrator | 2026-02-05 00:42:56.990667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990675 | orchestrator | Thursday 05 February 2026 00:42:47 +0000 (0:00:00.590) 0:00:26.688 ***** 2026-02-05 00:42:56.990683 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf) 2026-02-05 00:42:56.990691 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf) 2026-02-05 00:42:56.990699 | orchestrator | 2026-02-05 00:42:56.990706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990714 | orchestrator | Thursday 05 February 2026 00:42:47 +0000 (0:00:00.537) 0:00:27.225 ***** 2026-02-05 00:42:56.990722 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30) 2026-02-05 00:42:56.990730 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30) 2026-02-05 00:42:56.990738 | orchestrator | 2026-02-05 00:42:56.990747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:56.990754 | orchestrator | Thursday 05 February 2026 00:42:48 +0000 (0:00:00.811) 0:00:28.036 ***** 2026-02-05 00:42:56.990762 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 00:42:56.990770 | orchestrator | 2026-02-05 00:42:56.990778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.990786 | orchestrator | Thursday 05 February 2026 00:42:49 +0000 (0:00:00.693) 0:00:28.730 ***** 2026-02-05 00:42:56.990794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-05 00:42:56.990803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:42:56.990811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:42:56.990819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:42:56.990826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:42:56.990857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:42:56.990888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:42:56.990896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:42:56.990904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-05 00:42:56.990912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:42:56.990920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:42:56.990927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:42:56.990935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:42:56.990943 | orchestrator | 2026-02-05 00:42:56.990951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.990959 | orchestrator | Thursday 05 February 2026 00:42:50 +0000 (0:00:00.988) 0:00:29.719 ***** 2026-02-05 00:42:56.990966 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.990974 | orchestrator | 2026-02-05 
00:42:56.990982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.990990 | orchestrator | Thursday 05 February 2026 00:42:50 +0000 (0:00:00.206) 0:00:29.925 ***** 2026-02-05 00:42:56.990997 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991005 | orchestrator | 2026-02-05 00:42:56.991013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991020 | orchestrator | Thursday 05 February 2026 00:42:50 +0000 (0:00:00.224) 0:00:30.150 ***** 2026-02-05 00:42:56.991028 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991036 | orchestrator | 2026-02-05 00:42:56.991063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991072 | orchestrator | Thursday 05 February 2026 00:42:50 +0000 (0:00:00.175) 0:00:30.325 ***** 2026-02-05 00:42:56.991080 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991088 | orchestrator | 2026-02-05 00:42:56.991096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991103 | orchestrator | Thursday 05 February 2026 00:42:51 +0000 (0:00:00.197) 0:00:30.522 ***** 2026-02-05 00:42:56.991111 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991119 | orchestrator | 2026-02-05 00:42:56.991127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991134 | orchestrator | Thursday 05 February 2026 00:42:51 +0000 (0:00:00.184) 0:00:30.707 ***** 2026-02-05 00:42:56.991142 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991150 | orchestrator | 2026-02-05 00:42:56.991157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991165 | orchestrator | Thursday 05 February 2026 00:42:51 +0000 (0:00:00.181) 
0:00:30.888 ***** 2026-02-05 00:42:56.991173 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991181 | orchestrator | 2026-02-05 00:42:56.991188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991196 | orchestrator | Thursday 05 February 2026 00:42:51 +0000 (0:00:00.177) 0:00:31.066 ***** 2026-02-05 00:42:56.991204 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991212 | orchestrator | 2026-02-05 00:42:56.991219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991227 | orchestrator | Thursday 05 February 2026 00:42:51 +0000 (0:00:00.189) 0:00:31.255 ***** 2026-02-05 00:42:56.991235 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-05 00:42:56.991243 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-05 00:42:56.991251 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-05 00:42:56.991259 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-05 00:42:56.991267 | orchestrator | 2026-02-05 00:42:56.991275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991290 | orchestrator | Thursday 05 February 2026 00:42:52 +0000 (0:00:00.777) 0:00:32.033 ***** 2026-02-05 00:42:56.991298 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991305 | orchestrator | 2026-02-05 00:42:56.991313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991321 | orchestrator | Thursday 05 February 2026 00:42:52 +0000 (0:00:00.173) 0:00:32.206 ***** 2026-02-05 00:42:56.991329 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991336 | orchestrator | 2026-02-05 00:42:56.991344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991352 | orchestrator | Thursday 05 
February 2026 00:42:53 +0000 (0:00:00.475) 0:00:32.682 ***** 2026-02-05 00:42:56.991360 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991383 | orchestrator | 2026-02-05 00:42:56.991391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:56.991398 | orchestrator | Thursday 05 February 2026 00:42:53 +0000 (0:00:00.210) 0:00:32.892 ***** 2026-02-05 00:42:56.991406 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991414 | orchestrator | 2026-02-05 00:42:56.991422 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-05 00:42:56.991434 | orchestrator | Thursday 05 February 2026 00:42:53 +0000 (0:00:00.184) 0:00:33.076 ***** 2026-02-05 00:42:56.991442 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991450 | orchestrator | 2026-02-05 00:42:56.991457 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-05 00:42:56.991465 | orchestrator | Thursday 05 February 2026 00:42:53 +0000 (0:00:00.123) 0:00:33.200 ***** 2026-02-05 00:42:56.991495 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}}) 2026-02-05 00:42:56.991504 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}}) 2026-02-05 00:42:56.991512 | orchestrator | 2026-02-05 00:42:56.991520 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-05 00:42:56.991528 | orchestrator | Thursday 05 February 2026 00:42:53 +0000 (0:00:00.177) 0:00:33.378 ***** 2026-02-05 00:42:56.991537 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}) 2026-02-05 00:42:56.991546 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}) 2026-02-05 00:42:56.991554 | orchestrator | 2026-02-05 00:42:56.991562 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-05 00:42:56.991570 | orchestrator | Thursday 05 February 2026 00:42:55 +0000 (0:00:01.646) 0:00:35.025 ***** 2026-02-05 00:42:56.991577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:42:56.991587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:42:56.991595 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:56.991603 | orchestrator | 2026-02-05 00:42:56.991610 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-05 00:42:56.991618 | orchestrator | Thursday 05 February 2026 00:42:55 +0000 (0:00:00.158) 0:00:35.183 ***** 2026-02-05 00:42:56.991626 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}) 2026-02-05 00:42:56.991640 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}) 2026-02-05 00:43:01.999018 | orchestrator | 2026-02-05 00:43:01.999151 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-05 00:43:01.999247 | orchestrator | Thursday 05 February 2026 00:42:56 +0000 (0:00:01.254) 0:00:36.438 ***** 2026-02-05 00:43:01.999273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 
'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:01.999294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:01.999315 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999336 | orchestrator | 2026-02-05 00:43:01.999356 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-05 00:43:01.999376 | orchestrator | Thursday 05 February 2026 00:42:57 +0000 (0:00:00.126) 0:00:36.564 ***** 2026-02-05 00:43:01.999395 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999416 | orchestrator | 2026-02-05 00:43:01.999436 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-05 00:43:01.999457 | orchestrator | Thursday 05 February 2026 00:42:57 +0000 (0:00:00.121) 0:00:36.686 ***** 2026-02-05 00:43:01.999519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:01.999538 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:01.999557 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999569 | orchestrator | 2026-02-05 00:43:01.999580 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-05 00:43:01.999590 | orchestrator | Thursday 05 February 2026 00:42:57 +0000 (0:00:00.122) 0:00:36.808 ***** 2026-02-05 00:43:01.999600 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999609 | orchestrator | 2026-02-05 00:43:01.999619 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-05 00:43:01.999629 | orchestrator | 
Thursday 05 February 2026 00:42:57 +0000 (0:00:00.126) 0:00:36.935 ***** 2026-02-05 00:43:01.999639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:01.999649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:01.999658 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999668 | orchestrator | 2026-02-05 00:43:01.999678 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-05 00:43:01.999704 | orchestrator | Thursday 05 February 2026 00:42:57 +0000 (0:00:00.260) 0:00:37.195 ***** 2026-02-05 00:43:01.999714 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999723 | orchestrator | 2026-02-05 00:43:01.999733 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-05 00:43:01.999743 | orchestrator | Thursday 05 February 2026 00:42:57 +0000 (0:00:00.127) 0:00:37.322 ***** 2026-02-05 00:43:01.999752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:01.999762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:01.999772 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999782 | orchestrator | 2026-02-05 00:43:01.999791 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-05 00:43:01.999801 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.141) 0:00:37.464 ***** 2026-02-05 00:43:01.999811 | orchestrator | ok: [testbed-node-4] 
2026-02-05 00:43:01.999821 | orchestrator | 2026-02-05 00:43:01.999831 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-05 00:43:01.999858 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.139) 0:00:37.604 ***** 2026-02-05 00:43:01.999874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:01.999890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:01.999908 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:01.999923 | orchestrator | 2026-02-05 00:43:01.999940 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-05 00:43:01.999956 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.134) 0:00:37.738 ***** 2026-02-05 00:43:01.999973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:01.999985 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:01.999995 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000004 | orchestrator | 2026-02-05 00:43:02.000015 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-05 00:43:02.000045 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.142) 0:00:37.881 ***** 2026-02-05 00:43:02.000056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 
00:43:02.000066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:02.000076 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000085 | orchestrator | 2026-02-05 00:43:02.000095 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-05 00:43:02.000105 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.143) 0:00:38.025 ***** 2026-02-05 00:43:02.000114 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000124 | orchestrator | 2026-02-05 00:43:02.000134 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-05 00:43:02.000144 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.121) 0:00:38.147 ***** 2026-02-05 00:43:02.000153 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000163 | orchestrator | 2026-02-05 00:43:02.000173 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-05 00:43:02.000182 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.126) 0:00:38.273 ***** 2026-02-05 00:43:02.000192 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000202 | orchestrator | 2026-02-05 00:43:02.000211 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-05 00:43:02.000221 | orchestrator | Thursday 05 February 2026 00:42:58 +0000 (0:00:00.103) 0:00:38.377 ***** 2026-02-05 00:43:02.000231 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:43:02.000240 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-05 00:43:02.000255 | orchestrator | } 2026-02-05 00:43:02.000271 | orchestrator | 2026-02-05 00:43:02.000287 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-05 
00:43:02.000303 | orchestrator | Thursday 05 February 2026 00:42:59 +0000 (0:00:00.122) 0:00:38.499 ***** 2026-02-05 00:43:02.000318 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:43:02.000335 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-05 00:43:02.000351 | orchestrator | } 2026-02-05 00:43:02.000367 | orchestrator | 2026-02-05 00:43:02.000383 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-05 00:43:02.000399 | orchestrator | Thursday 05 February 2026 00:42:59 +0000 (0:00:00.129) 0:00:38.629 ***** 2026-02-05 00:43:02.000427 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:43:02.000444 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-05 00:43:02.000461 | orchestrator | } 2026-02-05 00:43:02.000503 | orchestrator | 2026-02-05 00:43:02.000520 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-05 00:43:02.000536 | orchestrator | Thursday 05 February 2026 00:42:59 +0000 (0:00:00.267) 0:00:38.896 ***** 2026-02-05 00:43:02.000552 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:02.000568 | orchestrator | 2026-02-05 00:43:02.000583 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-05 00:43:02.000598 | orchestrator | Thursday 05 February 2026 00:42:59 +0000 (0:00:00.475) 0:00:39.372 ***** 2026-02-05 00:43:02.000614 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:02.000630 | orchestrator | 2026-02-05 00:43:02.000646 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-05 00:43:02.000661 | orchestrator | Thursday 05 February 2026 00:43:00 +0000 (0:00:00.502) 0:00:39.874 ***** 2026-02-05 00:43:02.000677 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:02.000693 | orchestrator | 2026-02-05 00:43:02.000709 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-05 00:43:02.000725 | orchestrator | Thursday 05 February 2026 00:43:00 +0000 (0:00:00.503) 0:00:40.378 ***** 2026-02-05 00:43:02.000741 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:02.000757 | orchestrator | 2026-02-05 00:43:02.000772 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-05 00:43:02.000789 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.172) 0:00:40.551 ***** 2026-02-05 00:43:02.000805 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000821 | orchestrator | 2026-02-05 00:43:02.000849 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-05 00:43:02.000867 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.143) 0:00:40.694 ***** 2026-02-05 00:43:02.000886 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.000903 | orchestrator | 2026-02-05 00:43:02.000919 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-05 00:43:02.000935 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.110) 0:00:40.804 ***** 2026-02-05 00:43:02.000952 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:43:02.001002 | orchestrator |  "vgs_report": { 2026-02-05 00:43:02.001021 | orchestrator |  "vg": [] 2026-02-05 00:43:02.001038 | orchestrator |  } 2026-02-05 00:43:02.001054 | orchestrator | } 2026-02-05 00:43:02.001071 | orchestrator | 2026-02-05 00:43:02.001087 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-05 00:43:02.001103 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.135) 0:00:40.940 ***** 2026-02-05 00:43:02.001120 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.001136 | orchestrator | 2026-02-05 00:43:02.001152 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-05 00:43:02.001169 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.129) 0:00:41.070 ***** 2026-02-05 00:43:02.001185 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.001202 | orchestrator | 2026-02-05 00:43:02.001218 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-05 00:43:02.001235 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.134) 0:00:41.204 ***** 2026-02-05 00:43:02.001251 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.001267 | orchestrator | 2026-02-05 00:43:02.001284 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-05 00:43:02.001300 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.121) 0:00:41.326 ***** 2026-02-05 00:43:02.001317 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:02.001333 | orchestrator | 2026-02-05 00:43:02.001365 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-05 00:43:06.236597 | orchestrator | Thursday 05 February 2026 00:43:01 +0000 (0:00:00.125) 0:00:41.451 ***** 2026-02-05 00:43:06.236723 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236736 | orchestrator | 2026-02-05 00:43:06.236745 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-05 00:43:06.236755 | orchestrator | Thursday 05 February 2026 00:43:02 +0000 (0:00:00.304) 0:00:41.756 ***** 2026-02-05 00:43:06.236798 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236807 | orchestrator | 2026-02-05 00:43:06.236815 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-05 00:43:06.236824 | orchestrator | Thursday 05 February 2026 00:43:02 +0000 (0:00:00.122) 0:00:41.878 ***** 2026-02-05 00:43:06.236832 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 00:43:06.236840 | orchestrator | 2026-02-05 00:43:06.236849 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-05 00:43:06.236857 | orchestrator | Thursday 05 February 2026 00:43:02 +0000 (0:00:00.146) 0:00:42.025 ***** 2026-02-05 00:43:06.236865 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236873 | orchestrator | 2026-02-05 00:43:06.236881 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-05 00:43:06.236889 | orchestrator | Thursday 05 February 2026 00:43:02 +0000 (0:00:00.135) 0:00:42.161 ***** 2026-02-05 00:43:06.236896 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236904 | orchestrator | 2026-02-05 00:43:06.236912 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-05 00:43:06.236920 | orchestrator | Thursday 05 February 2026 00:43:02 +0000 (0:00:00.138) 0:00:42.299 ***** 2026-02-05 00:43:06.236928 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236935 | orchestrator | 2026-02-05 00:43:06.236943 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-05 00:43:06.236950 | orchestrator | Thursday 05 February 2026 00:43:02 +0000 (0:00:00.119) 0:00:42.418 ***** 2026-02-05 00:43:06.236957 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236965 | orchestrator | 2026-02-05 00:43:06.236972 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-05 00:43:06.236979 | orchestrator | Thursday 05 February 2026 00:43:03 +0000 (0:00:00.122) 0:00:42.540 ***** 2026-02-05 00:43:06.236986 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.236992 | orchestrator | 2026-02-05 00:43:06.236999 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-05 00:43:06.237007 | orchestrator | 
Thursday 05 February 2026 00:43:03 +0000 (0:00:00.135) 0:00:42.676 ***** 2026-02-05 00:43:06.237074 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237083 | orchestrator | 2026-02-05 00:43:06.237091 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-05 00:43:06.237099 | orchestrator | Thursday 05 February 2026 00:43:03 +0000 (0:00:00.118) 0:00:42.794 ***** 2026-02-05 00:43:06.237107 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237115 | orchestrator | 2026-02-05 00:43:06.237122 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-05 00:43:06.237145 | orchestrator | Thursday 05 February 2026 00:43:03 +0000 (0:00:00.123) 0:00:42.918 ***** 2026-02-05 00:43:06.237155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237172 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237181 | orchestrator | 2026-02-05 00:43:06.237189 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-05 00:43:06.237196 | orchestrator | Thursday 05 February 2026 00:43:03 +0000 (0:00:00.134) 0:00:43.053 ***** 2026-02-05 00:43:06.237204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237228 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 00:43:06.237236 | orchestrator | 2026-02-05 00:43:06.237244 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-05 00:43:06.237252 | orchestrator | Thursday 05 February 2026 00:43:03 +0000 (0:00:00.128) 0:00:43.182 ***** 2026-02-05 00:43:06.237260 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237276 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237283 | orchestrator | 2026-02-05 00:43:06.237291 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-05 00:43:06.237299 | orchestrator | Thursday 05 February 2026 00:43:03 +0000 (0:00:00.275) 0:00:43.457 ***** 2026-02-05 00:43:06.237307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237323 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237331 | orchestrator | 2026-02-05 00:43:06.237355 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 00:43:06.237363 | orchestrator | Thursday 05 February 2026 00:43:04 +0000 (0:00:00.153) 0:00:43.611 ***** 2026-02-05 00:43:06.237371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 
'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237387 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237395 | orchestrator | 2026-02-05 00:43:06.237403 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 00:43:06.237411 | orchestrator | Thursday 05 February 2026 00:43:04 +0000 (0:00:00.126) 0:00:43.738 ***** 2026-02-05 00:43:06.237419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237436 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237444 | orchestrator | 2026-02-05 00:43:06.237452 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-05 00:43:06.237460 | orchestrator | Thursday 05 February 2026 00:43:04 +0000 (0:00:00.127) 0:00:43.866 ***** 2026-02-05 00:43:06.237483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237498 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237505 | orchestrator | 2026-02-05 00:43:06.237512 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 
00:43:06.237519 | orchestrator | Thursday 05 February 2026 00:43:04 +0000 (0:00:00.139) 0:00:44.005 ***** 2026-02-05 00:43:06.237526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237548 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237556 | orchestrator | 2026-02-05 00:43:06.237563 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 00:43:06.237570 | orchestrator | Thursday 05 February 2026 00:43:04 +0000 (0:00:00.126) 0:00:44.131 ***** 2026-02-05 00:43:06.237577 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:06.237585 | orchestrator | 2026-02-05 00:43:06.237592 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 00:43:06.237599 | orchestrator | Thursday 05 February 2026 00:43:05 +0000 (0:00:00.499) 0:00:44.630 ***** 2026-02-05 00:43:06.237606 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:06.237613 | orchestrator | 2026-02-05 00:43:06.237620 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 00:43:06.237627 | orchestrator | Thursday 05 February 2026 00:43:05 +0000 (0:00:00.513) 0:00:45.144 ***** 2026-02-05 00:43:06.237634 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:06.237641 | orchestrator | 2026-02-05 00:43:06.237648 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 00:43:06.237655 | orchestrator | Thursday 05 February 2026 00:43:05 +0000 (0:00:00.129) 0:00:45.274 ***** 2026-02-05 00:43:06.237663 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'vg_name': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}) 2026-02-05 00:43:06.237672 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'vg_name': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}) 2026-02-05 00:43:06.237679 | orchestrator | 2026-02-05 00:43:06.237686 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 00:43:06.237693 | orchestrator | Thursday 05 February 2026 00:43:05 +0000 (0:00:00.148) 0:00:45.422 ***** 2026-02-05 00:43:06.237700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:06.237713 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:06.237720 | orchestrator | 2026-02-05 00:43:06.237726 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-05 00:43:06.237733 | orchestrator | Thursday 05 February 2026 00:43:06 +0000 (0:00:00.139) 0:00:45.562 ***** 2026-02-05 00:43:06.237739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:06.237751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:11.512647 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:11.512764 | orchestrator | 2026-02-05 00:43:11.512779 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 00:43:11.512789 | 
orchestrator | Thursday 05 February 2026 00:43:06 +0000 (0:00:00.129) 0:00:45.691 ***** 2026-02-05 00:43:11.512796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'})  2026-02-05 00:43:11.512839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'})  2026-02-05 00:43:11.512847 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:11.512854 | orchestrator | 2026-02-05 00:43:11.512861 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 00:43:11.512886 | orchestrator | Thursday 05 February 2026 00:43:06 +0000 (0:00:00.139) 0:00:45.830 ***** 2026-02-05 00:43:11.512893 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:43:11.512899 | orchestrator |  "lvm_report": { 2026-02-05 00:43:11.512907 | orchestrator |  "lv": [ 2026-02-05 00:43:11.512914 | orchestrator |  { 2026-02-05 00:43:11.512920 | orchestrator |  "lv_name": "osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe", 2026-02-05 00:43:11.512928 | orchestrator |  "vg_name": "ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe" 2026-02-05 00:43:11.512934 | orchestrator |  }, 2026-02-05 00:43:11.512940 | orchestrator |  { 2026-02-05 00:43:11.512946 | orchestrator |  "lv_name": "osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5", 2026-02-05 00:43:11.512953 | orchestrator |  "vg_name": "ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5" 2026-02-05 00:43:11.512959 | orchestrator |  } 2026-02-05 00:43:11.512965 | orchestrator |  ], 2026-02-05 00:43:11.512971 | orchestrator |  "pv": [ 2026-02-05 00:43:11.512977 | orchestrator |  { 2026-02-05 00:43:11.512984 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 00:43:11.512991 | orchestrator |  "vg_name": "ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe" 2026-02-05 00:43:11.512998 | orchestrator |  }, 2026-02-05 
00:43:11.513004 | orchestrator |  { 2026-02-05 00:43:11.513010 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 00:43:11.513017 | orchestrator |  "vg_name": "ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5" 2026-02-05 00:43:11.513024 | orchestrator |  } 2026-02-05 00:43:11.513031 | orchestrator |  ] 2026-02-05 00:43:11.513037 | orchestrator |  } 2026-02-05 00:43:11.513044 | orchestrator | } 2026-02-05 00:43:11.513051 | orchestrator | 2026-02-05 00:43:11.513057 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-05 00:43:11.513064 | orchestrator | 2026-02-05 00:43:11.513070 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 00:43:11.513077 | orchestrator | Thursday 05 February 2026 00:43:06 +0000 (0:00:00.382) 0:00:46.213 ***** 2026-02-05 00:43:11.513083 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 00:43:11.513090 | orchestrator | 2026-02-05 00:43:11.513097 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 00:43:11.513104 | orchestrator | Thursday 05 February 2026 00:43:06 +0000 (0:00:00.225) 0:00:46.438 ***** 2026-02-05 00:43:11.513110 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:11.513116 | orchestrator | 2026-02-05 00:43:11.513122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513128 | orchestrator | Thursday 05 February 2026 00:43:07 +0000 (0:00:00.208) 0:00:46.647 ***** 2026-02-05 00:43:11.513134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-05 00:43:11.513140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-05 00:43:11.513146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-05 00:43:11.513153 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-05 00:43:11.513159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-05 00:43:11.513164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-05 00:43:11.513170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-05 00:43:11.513176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-05 00:43:11.513182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-05 00:43:11.513188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-05 00:43:11.513201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-05 00:43:11.513207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-05 00:43:11.513214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-05 00:43:11.513221 | orchestrator | 2026-02-05 00:43:11.513228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513239 | orchestrator | Thursday 05 February 2026 00:43:07 +0000 (0:00:00.366) 0:00:47.013 ***** 2026-02-05 00:43:11.513249 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513258 | orchestrator | 2026-02-05 00:43:11.513266 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513275 | orchestrator | Thursday 05 February 2026 00:43:07 +0000 (0:00:00.231) 0:00:47.245 ***** 2026-02-05 00:43:11.513284 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513293 | orchestrator | 2026-02-05 
00:43:11.513301 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513324 | orchestrator | Thursday 05 February 2026 00:43:07 +0000 (0:00:00.169) 0:00:47.414 ***** 2026-02-05 00:43:11.513331 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513337 | orchestrator | 2026-02-05 00:43:11.513343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513349 | orchestrator | Thursday 05 February 2026 00:43:08 +0000 (0:00:00.174) 0:00:47.588 ***** 2026-02-05 00:43:11.513355 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513360 | orchestrator | 2026-02-05 00:43:11.513366 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513415 | orchestrator | Thursday 05 February 2026 00:43:08 +0000 (0:00:00.183) 0:00:47.772 ***** 2026-02-05 00:43:11.513423 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513429 | orchestrator | 2026-02-05 00:43:11.513435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513441 | orchestrator | Thursday 05 February 2026 00:43:08 +0000 (0:00:00.459) 0:00:48.231 ***** 2026-02-05 00:43:11.513447 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513453 | orchestrator | 2026-02-05 00:43:11.513459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513490 | orchestrator | Thursday 05 February 2026 00:43:08 +0000 (0:00:00.176) 0:00:48.407 ***** 2026-02-05 00:43:11.513496 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513502 | orchestrator | 2026-02-05 00:43:11.513508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513514 | orchestrator | Thursday 05 February 2026 00:43:09 +0000 (0:00:00.189) 
0:00:48.596 ***** 2026-02-05 00:43:11.513521 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:11.513527 | orchestrator | 2026-02-05 00:43:11.513534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513541 | orchestrator | Thursday 05 February 2026 00:43:09 +0000 (0:00:00.173) 0:00:48.770 ***** 2026-02-05 00:43:11.513548 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa) 2026-02-05 00:43:11.513556 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa) 2026-02-05 00:43:11.513562 | orchestrator | 2026-02-05 00:43:11.513569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513576 | orchestrator | Thursday 05 February 2026 00:43:09 +0000 (0:00:00.375) 0:00:49.145 ***** 2026-02-05 00:43:11.513583 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb) 2026-02-05 00:43:11.513590 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb) 2026-02-05 00:43:11.513597 | orchestrator | 2026-02-05 00:43:11.513603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513628 | orchestrator | Thursday 05 February 2026 00:43:10 +0000 (0:00:00.370) 0:00:49.516 ***** 2026-02-05 00:43:11.513635 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3) 2026-02-05 00:43:11.513642 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3) 2026-02-05 00:43:11.513648 | orchestrator | 2026-02-05 00:43:11.513654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513660 | orchestrator | Thursday 05 
February 2026 00:43:10 +0000 (0:00:00.397) 0:00:49.913 ***** 2026-02-05 00:43:11.513666 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722) 2026-02-05 00:43:11.513672 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722) 2026-02-05 00:43:11.513678 | orchestrator | 2026-02-05 00:43:11.513684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:11.513690 | orchestrator | Thursday 05 February 2026 00:43:10 +0000 (0:00:00.365) 0:00:50.279 ***** 2026-02-05 00:43:11.513696 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 00:43:11.513703 | orchestrator | 2026-02-05 00:43:11.513709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:11.513715 | orchestrator | Thursday 05 February 2026 00:43:11 +0000 (0:00:00.309) 0:00:50.588 ***** 2026-02-05 00:43:11.513722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-05 00:43:11.513729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-05 00:43:11.513736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-05 00:43:11.513742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-05 00:43:11.513748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-05 00:43:11.513755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-05 00:43:11.513761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-05 00:43:11.513768 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-05 00:43:11.513775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-05 00:43:11.513781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-05 00:43:11.513788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-05 00:43:11.513804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-05 00:43:19.709314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-05 00:43:19.709407 | orchestrator | 2026-02-05 00:43:19.709419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709429 | orchestrator | Thursday 05 February 2026 00:43:11 +0000 (0:00:00.368) 0:00:50.957 ***** 2026-02-05 00:43:19.709438 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709447 | orchestrator | 2026-02-05 00:43:19.709456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709547 | orchestrator | Thursday 05 February 2026 00:43:11 +0000 (0:00:00.183) 0:00:51.140 ***** 2026-02-05 00:43:19.709556 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709565 | orchestrator | 2026-02-05 00:43:19.709573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709582 | orchestrator | Thursday 05 February 2026 00:43:12 +0000 (0:00:00.453) 0:00:51.594 ***** 2026-02-05 00:43:19.709590 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709620 | orchestrator | 2026-02-05 00:43:19.709629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709637 | 
orchestrator | Thursday 05 February 2026 00:43:12 +0000 (0:00:00.184) 0:00:51.778 ***** 2026-02-05 00:43:19.709645 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709653 | orchestrator | 2026-02-05 00:43:19.709661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709669 | orchestrator | Thursday 05 February 2026 00:43:12 +0000 (0:00:00.186) 0:00:51.965 ***** 2026-02-05 00:43:19.709676 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709684 | orchestrator | 2026-02-05 00:43:19.709692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709700 | orchestrator | Thursday 05 February 2026 00:43:12 +0000 (0:00:00.161) 0:00:52.126 ***** 2026-02-05 00:43:19.709708 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709716 | orchestrator | 2026-02-05 00:43:19.709725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709732 | orchestrator | Thursday 05 February 2026 00:43:12 +0000 (0:00:00.212) 0:00:52.338 ***** 2026-02-05 00:43:19.709741 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709749 | orchestrator | 2026-02-05 00:43:19.709759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709767 | orchestrator | Thursday 05 February 2026 00:43:13 +0000 (0:00:00.172) 0:00:52.511 ***** 2026-02-05 00:43:19.709775 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709783 | orchestrator | 2026-02-05 00:43:19.709790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709798 | orchestrator | Thursday 05 February 2026 00:43:13 +0000 (0:00:00.177) 0:00:52.688 ***** 2026-02-05 00:43:19.709806 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-05 00:43:19.709831 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-05 00:43:19.709840 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-05 00:43:19.709849 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-05 00:43:19.709856 | orchestrator | 2026-02-05 00:43:19.709864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709872 | orchestrator | Thursday 05 February 2026 00:43:13 +0000 (0:00:00.619) 0:00:53.308 ***** 2026-02-05 00:43:19.709880 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709888 | orchestrator | 2026-02-05 00:43:19.709896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709905 | orchestrator | Thursday 05 February 2026 00:43:14 +0000 (0:00:00.191) 0:00:53.500 ***** 2026-02-05 00:43:19.709913 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709922 | orchestrator | 2026-02-05 00:43:19.709930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709938 | orchestrator | Thursday 05 February 2026 00:43:14 +0000 (0:00:00.165) 0:00:53.665 ***** 2026-02-05 00:43:19.709946 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709953 | orchestrator | 2026-02-05 00:43:19.709961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:43:19.709969 | orchestrator | Thursday 05 February 2026 00:43:14 +0000 (0:00:00.198) 0:00:53.864 ***** 2026-02-05 00:43:19.709977 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.709985 | orchestrator | 2026-02-05 00:43:19.709992 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-05 00:43:19.710000 | orchestrator | Thursday 05 February 2026 00:43:14 +0000 (0:00:00.182) 0:00:54.046 ***** 2026-02-05 00:43:19.710009 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
00:43:19.710089 | orchestrator | 2026-02-05 00:43:19.710097 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-05 00:43:19.710105 | orchestrator | Thursday 05 February 2026 00:43:14 +0000 (0:00:00.254) 0:00:54.301 ***** 2026-02-05 00:43:19.710112 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3edfc207-63bb-5e8f-b635-306c655bc02c'}}) 2026-02-05 00:43:19.710129 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '121c279b-9e45-54e8-9359-e1d452607edd'}}) 2026-02-05 00:43:19.710137 | orchestrator | 2026-02-05 00:43:19.710144 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-05 00:43:19.710151 | orchestrator | Thursday 05 February 2026 00:43:15 +0000 (0:00:00.179) 0:00:54.480 ***** 2026-02-05 00:43:19.710160 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'}) 2026-02-05 00:43:19.710169 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'}) 2026-02-05 00:43:19.710177 | orchestrator | 2026-02-05 00:43:19.710184 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-05 00:43:19.710207 | orchestrator | Thursday 05 February 2026 00:43:16 +0000 (0:00:01.863) 0:00:56.344 ***** 2026-02-05 00:43:19.710215 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:19.710224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:19.710232 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 00:43:19.710239 | orchestrator | 2026-02-05 00:43:19.710246 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-05 00:43:19.710254 | orchestrator | Thursday 05 February 2026 00:43:17 +0000 (0:00:00.141) 0:00:56.485 ***** 2026-02-05 00:43:19.710261 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'}) 2026-02-05 00:43:19.710269 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'}) 2026-02-05 00:43:19.710276 | orchestrator | 2026-02-05 00:43:19.710283 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-05 00:43:19.710290 | orchestrator | Thursday 05 February 2026 00:43:18 +0000 (0:00:01.343) 0:00:57.828 ***** 2026-02-05 00:43:19.710297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:19.710304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:19.710311 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.710318 | orchestrator | 2026-02-05 00:43:19.710325 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-05 00:43:19.710332 | orchestrator | Thursday 05 February 2026 00:43:18 +0000 (0:00:00.126) 0:00:57.955 ***** 2026-02-05 00:43:19.710339 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.710346 | orchestrator | 2026-02-05 00:43:19.710354 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-05 00:43:19.710361 | 
orchestrator | Thursday 05 February 2026 00:43:18 +0000 (0:00:00.124) 0:00:58.080 ***** 2026-02-05 00:43:19.710368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:19.710381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:19.710389 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.710396 | orchestrator | 2026-02-05 00:43:19.710404 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-05 00:43:19.710411 | orchestrator | Thursday 05 February 2026 00:43:18 +0000 (0:00:00.137) 0:00:58.218 ***** 2026-02-05 00:43:19.710425 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.710432 | orchestrator | 2026-02-05 00:43:19.710440 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-05 00:43:19.710447 | orchestrator | Thursday 05 February 2026 00:43:18 +0000 (0:00:00.152) 0:00:58.370 ***** 2026-02-05 00:43:19.710454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:19.710478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:19.710486 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.710493 | orchestrator | 2026-02-05 00:43:19.710501 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-05 00:43:19.710508 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.139) 0:00:58.510 ***** 2026-02-05 00:43:19.710515 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 00:43:19.710523 | orchestrator | 2026-02-05 00:43:19.710530 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-05 00:43:19.710537 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.131) 0:00:58.641 ***** 2026-02-05 00:43:19.710544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:19.710551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:19.710559 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:19.710566 | orchestrator | 2026-02-05 00:43:19.710574 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-05 00:43:19.710581 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.144) 0:00:58.785 ***** 2026-02-05 00:43:19.710589 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:19.710596 | orchestrator | 2026-02-05 00:43:19.710604 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-05 00:43:19.710611 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.249) 0:00:59.035 ***** 2026-02-05 00:43:19.710625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:25.184050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:25.184125 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184131 | orchestrator | 2026-02-05 00:43:25.184137 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-05 00:43:25.184143 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.128) 0:00:59.163 ***** 2026-02-05 00:43:25.184147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:25.184152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:25.184156 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184160 | orchestrator | 2026-02-05 00:43:25.184164 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-05 00:43:25.184168 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.136) 0:00:59.300 ***** 2026-02-05 00:43:25.184172 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:25.184176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:25.184195 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184199 | orchestrator | 2026-02-05 00:43:25.184203 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-05 00:43:25.184207 | orchestrator | Thursday 05 February 2026 00:43:19 +0000 (0:00:00.128) 0:00:59.428 ***** 2026-02-05 00:43:25.184210 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184214 | orchestrator | 2026-02-05 00:43:25.184218 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-05 00:43:25.184221 | orchestrator | Thursday 05 February 2026 00:43:20 
+0000 (0:00:00.125) 0:00:59.554 ***** 2026-02-05 00:43:25.184225 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184229 | orchestrator | 2026-02-05 00:43:25.184232 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-05 00:43:25.184236 | orchestrator | Thursday 05 February 2026 00:43:20 +0000 (0:00:00.130) 0:00:59.685 ***** 2026-02-05 00:43:25.184240 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184244 | orchestrator | 2026-02-05 00:43:25.184248 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-05 00:43:25.184251 | orchestrator | Thursday 05 February 2026 00:43:20 +0000 (0:00:00.107) 0:00:59.793 ***** 2026-02-05 00:43:25.184255 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:43:25.184260 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-05 00:43:25.184264 | orchestrator | } 2026-02-05 00:43:25.184268 | orchestrator | 2026-02-05 00:43:25.184272 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-05 00:43:25.184276 | orchestrator | Thursday 05 February 2026 00:43:20 +0000 (0:00:00.116) 0:00:59.909 ***** 2026-02-05 00:43:25.184280 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:43:25.184283 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-05 00:43:25.184287 | orchestrator | } 2026-02-05 00:43:25.184291 | orchestrator | 2026-02-05 00:43:25.184295 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-05 00:43:25.184299 | orchestrator | Thursday 05 February 2026 00:43:20 +0000 (0:00:00.135) 0:01:00.045 ***** 2026-02-05 00:43:25.184302 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:43:25.184306 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-05 00:43:25.184310 | orchestrator | } 2026-02-05 00:43:25.184314 | orchestrator | 2026-02-05 00:43:25.184318 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-05 00:43:25.184321 | orchestrator | Thursday 05 February 2026 00:43:20 +0000 (0:00:00.131) 0:01:00.177 ***** 2026-02-05 00:43:25.184325 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:25.184329 | orchestrator | 2026-02-05 00:43:25.184333 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-05 00:43:25.184336 | orchestrator | Thursday 05 February 2026 00:43:21 +0000 (0:00:00.536) 0:01:00.713 ***** 2026-02-05 00:43:25.184340 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:25.184344 | orchestrator | 2026-02-05 00:43:25.184348 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-05 00:43:25.184351 | orchestrator | Thursday 05 February 2026 00:43:21 +0000 (0:00:00.485) 0:01:01.199 ***** 2026-02-05 00:43:25.184355 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:25.184359 | orchestrator | 2026-02-05 00:43:25.184362 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-05 00:43:25.184366 | orchestrator | Thursday 05 February 2026 00:43:22 +0000 (0:00:00.665) 0:01:01.864 ***** 2026-02-05 00:43:25.184370 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:25.184373 | orchestrator | 2026-02-05 00:43:25.184377 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-05 00:43:25.184381 | orchestrator | Thursday 05 February 2026 00:43:22 +0000 (0:00:00.135) 0:01:02.000 ***** 2026-02-05 00:43:25.184385 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184388 | orchestrator | 2026-02-05 00:43:25.184392 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-05 00:43:25.184400 | orchestrator | Thursday 05 February 2026 00:43:22 +0000 (0:00:00.091) 0:01:02.091 ***** 2026-02-05 00:43:25.184404 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184408 | orchestrator | 2026-02-05 00:43:25.184412 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-05 00:43:25.184427 | orchestrator | Thursday 05 February 2026 00:43:22 +0000 (0:00:00.098) 0:01:02.190 ***** 2026-02-05 00:43:25.184431 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:43:25.184435 | orchestrator |  "vgs_report": { 2026-02-05 00:43:25.184439 | orchestrator |  "vg": [] 2026-02-05 00:43:25.184453 | orchestrator |  } 2026-02-05 00:43:25.184493 | orchestrator | } 2026-02-05 00:43:25.184499 | orchestrator | 2026-02-05 00:43:25.184503 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-05 00:43:25.184507 | orchestrator | Thursday 05 February 2026 00:43:22 +0000 (0:00:00.118) 0:01:02.308 ***** 2026-02-05 00:43:25.184511 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184515 | orchestrator | 2026-02-05 00:43:25.184518 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-05 00:43:25.184522 | orchestrator | Thursday 05 February 2026 00:43:22 +0000 (0:00:00.114) 0:01:02.423 ***** 2026-02-05 00:43:25.184526 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184530 | orchestrator | 2026-02-05 00:43:25.184533 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-05 00:43:25.184537 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.117) 0:01:02.541 ***** 2026-02-05 00:43:25.184541 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184545 | orchestrator | 2026-02-05 00:43:25.184548 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-05 00:43:25.184552 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.128) 0:01:02.669 ***** 2026-02-05 00:43:25.184556 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184559 | orchestrator | 2026-02-05 00:43:25.184563 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-05 00:43:25.184567 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.125) 0:01:02.794 ***** 2026-02-05 00:43:25.184571 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184575 | orchestrator | 2026-02-05 00:43:25.184578 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-05 00:43:25.184582 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.111) 0:01:02.905 ***** 2026-02-05 00:43:25.184586 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184589 | orchestrator | 2026-02-05 00:43:25.184593 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-05 00:43:25.184597 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.127) 0:01:03.032 ***** 2026-02-05 00:43:25.184600 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184604 | orchestrator | 2026-02-05 00:43:25.184608 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-05 00:43:25.184612 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.132) 0:01:03.165 ***** 2026-02-05 00:43:25.184617 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184621 | orchestrator | 2026-02-05 00:43:25.184626 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-05 00:43:25.184630 | orchestrator | Thursday 05 February 2026 00:43:23 +0000 (0:00:00.270) 0:01:03.436 ***** 2026-02-05 00:43:25.184634 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184639 | orchestrator | 2026-02-05 00:43:25.184647 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-05 00:43:25.184651 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.150) 0:01:03.587 ***** 2026-02-05 00:43:25.184656 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184660 | orchestrator | 2026-02-05 00:43:25.184665 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-05 00:43:25.184670 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.129) 0:01:03.716 ***** 2026-02-05 00:43:25.184678 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184683 | orchestrator | 2026-02-05 00:43:25.184687 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-05 00:43:25.184692 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.125) 0:01:03.841 ***** 2026-02-05 00:43:25.184696 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184700 | orchestrator | 2026-02-05 00:43:25.184705 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-05 00:43:25.184709 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.129) 0:01:03.970 ***** 2026-02-05 00:43:25.184714 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184718 | orchestrator | 2026-02-05 00:43:25.184722 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-05 00:43:25.184726 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.112) 0:01:04.082 ***** 2026-02-05 00:43:25.184731 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184735 | orchestrator | 2026-02-05 00:43:25.184740 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-05 00:43:25.184744 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.120) 0:01:04.203 ***** 2026-02-05 00:43:25.184749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:25.184753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:25.184758 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184762 | orchestrator | 2026-02-05 00:43:25.184766 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-05 00:43:25.184771 | orchestrator | Thursday 05 February 2026 00:43:24 +0000 (0:00:00.148) 0:01:04.351 ***** 2026-02-05 00:43:25.184775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:25.184780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:25.184784 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:25.184789 | orchestrator | 2026-02-05 00:43:25.184793 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-05 00:43:25.184799 | orchestrator | Thursday 05 February 2026 00:43:25 +0000 (0:00:00.146) 0:01:04.498 ***** 2026-02-05 00:43:25.184810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004171 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004187 | orchestrator | 2026-02-05 00:43:28.004200 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-05 00:43:28.004213 | orchestrator | Thursday 05 February 2026 00:43:25 +0000 (0:00:00.139) 0:01:04.637 ***** 2026-02-05 00:43:28.004225 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004248 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004259 | orchestrator | 2026-02-05 00:43:28.004270 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 00:43:28.004281 | orchestrator | Thursday 05 February 2026 00:43:25 +0000 (0:00:00.151) 0:01:04.789 ***** 2026-02-05 00:43:28.004321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004344 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004355 | orchestrator | 2026-02-05 00:43:28.004366 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 00:43:28.004377 | orchestrator | Thursday 05 February 2026 00:43:25 +0000 (0:00:00.144) 0:01:04.933 ***** 2026-02-05 00:43:28.004388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004425 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004436 | orchestrator | 2026-02-05 00:43:28.004447 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-05 00:43:28.004521 | orchestrator | Thursday 05 February 2026 00:43:25 +0000 (0:00:00.284) 0:01:05.217 ***** 2026-02-05 00:43:28.004533 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004556 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004567 | orchestrator | 2026-02-05 00:43:28.004578 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 00:43:28.004589 | orchestrator | Thursday 05 February 2026 00:43:25 +0000 (0:00:00.143) 0:01:05.361 ***** 2026-02-05 00:43:28.004599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004621 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004632 | orchestrator | 2026-02-05 00:43:28.004643 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 00:43:28.004653 | orchestrator | Thursday 05 February 2026 00:43:26 +0000 (0:00:00.140) 0:01:05.501 ***** 2026-02-05 00:43:28.004664 | 
orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:28.004676 | orchestrator | 2026-02-05 00:43:28.004687 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 00:43:28.004698 | orchestrator | Thursday 05 February 2026 00:43:26 +0000 (0:00:00.547) 0:01:06.049 ***** 2026-02-05 00:43:28.004709 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:28.004719 | orchestrator | 2026-02-05 00:43:28.004730 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 00:43:28.004741 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.516) 0:01:06.565 ***** 2026-02-05 00:43:28.004752 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:28.004762 | orchestrator | 2026-02-05 00:43:28.004773 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 00:43:28.004784 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.145) 0:01:06.711 ***** 2026-02-05 00:43:28.004795 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'vg_name': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'}) 2026-02-05 00:43:28.004807 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'vg_name': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'}) 2026-02-05 00:43:28.004825 | orchestrator | 2026-02-05 00:43:28.004836 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 00:43:28.004847 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.161) 0:01:06.872 ***** 2026-02-05 00:43:28.004884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.004905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.004924 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.004943 | orchestrator | 2026-02-05 00:43:28.004963 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-05 00:43:28.004983 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.149) 0:01:07.021 ***** 2026-02-05 00:43:28.005004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.005017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.005028 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.005039 | orchestrator | 2026-02-05 00:43:28.005050 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 00:43:28.005061 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.135) 0:01:07.156 ***** 2026-02-05 00:43:28.005071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'})  2026-02-05 00:43:28.005082 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'})  2026-02-05 00:43:28.005093 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:28.005103 | orchestrator | 2026-02-05 00:43:28.005114 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 00:43:28.005125 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.140) 0:01:07.297 ***** 2026-02-05 00:43:28.005136 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:43:28.005146 | orchestrator |  "lvm_report": { 2026-02-05 00:43:28.005158 | orchestrator |  "lv": [ 2026-02-05 00:43:28.005169 | orchestrator |  { 2026-02-05 00:43:28.005180 | orchestrator |  "lv_name": "osd-block-121c279b-9e45-54e8-9359-e1d452607edd", 2026-02-05 00:43:28.005198 | orchestrator |  "vg_name": "ceph-121c279b-9e45-54e8-9359-e1d452607edd" 2026-02-05 00:43:28.005210 | orchestrator |  }, 2026-02-05 00:43:28.005220 | orchestrator |  { 2026-02-05 00:43:28.005231 | orchestrator |  "lv_name": "osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c", 2026-02-05 00:43:28.005243 | orchestrator |  "vg_name": "ceph-3edfc207-63bb-5e8f-b635-306c655bc02c" 2026-02-05 00:43:28.005253 | orchestrator |  } 2026-02-05 00:43:28.005264 | orchestrator |  ], 2026-02-05 00:43:28.005275 | orchestrator |  "pv": [ 2026-02-05 00:43:28.005286 | orchestrator |  { 2026-02-05 00:43:28.005296 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 00:43:28.005307 | orchestrator |  "vg_name": "ceph-3edfc207-63bb-5e8f-b635-306c655bc02c" 2026-02-05 00:43:28.005318 | orchestrator |  }, 2026-02-05 00:43:28.005329 | orchestrator |  { 2026-02-05 00:43:28.005340 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 00:43:28.005350 | orchestrator |  "vg_name": "ceph-121c279b-9e45-54e8-9359-e1d452607edd" 2026-02-05 00:43:28.005361 | orchestrator |  } 2026-02-05 00:43:28.005372 | orchestrator |  ] 2026-02-05 00:43:28.005383 | orchestrator |  } 2026-02-05 00:43:28.005394 | orchestrator | } 2026-02-05 00:43:28.005413 | orchestrator | 2026-02-05 00:43:28.005424 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:43:28.005435 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 00:43:28.005446 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 00:43:28.005479 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 00:43:28.005499 | orchestrator | 2026-02-05 00:43:28.005518 | orchestrator | 2026-02-05 00:43:28.005537 | orchestrator | 2026-02-05 00:43:28.005555 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:43:28.005573 | orchestrator | Thursday 05 February 2026 00:43:27 +0000 (0:00:00.142) 0:01:07.440 ***** 2026-02-05 00:43:28.005586 | orchestrator | =============================================================================== 2026-02-05 00:43:28.005597 | orchestrator | Create block VGs -------------------------------------------------------- 5.45s 2026-02-05 00:43:28.005608 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s 2026-02-05 00:43:28.005619 | orchestrator | Add known partitions to the list of available block devices ------------- 1.75s 2026-02-05 00:43:28.005629 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s 2026-02-05 00:43:28.005640 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.66s 2026-02-05 00:43:28.005651 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2026-02-05 00:43:28.005662 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2026-02-05 00:43:28.005673 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.47s 2026-02-05 00:43:28.005693 | orchestrator | Add known links to the list of available block devices ------------------ 1.37s 2026-02-05 00:43:28.294906 | orchestrator | Print LVM report data --------------------------------------------------- 0.85s 2026-02-05 00:43:28.295005 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2026-02-05 00:43:28.295020 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-02-05 00:43:28.295032 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-02-05 00:43:28.295047 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-02-05 00:43:28.295066 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-02-05 00:43:28.295094 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-05 00:43:28.295116 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-05 00:43:28.295135 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-02-05 00:43:28.295154 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.64s 2026-02-05 00:43:28.295172 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2026-02-05 00:43:40.435986 | orchestrator | 2026-02-05 00:43:40 | INFO  | Task 8b00a386-bebd-4d91-86a6-cde1f77ee1c5 (facts) was prepared for execution. 2026-02-05 00:43:40.436063 | orchestrator | 2026-02-05 00:43:40 | INFO  | It takes a moment until task 8b00a386-bebd-4d91-86a6-cde1f77ee1c5 (facts) has been started and output is visible here. 
2026-02-05 00:43:52.316084 | orchestrator | 2026-02-05 00:43:52.316197 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-05 00:43:52.316211 | orchestrator | 2026-02-05 00:43:52.316225 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 00:43:52.316238 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.257) 0:00:00.257 ***** 2026-02-05 00:43:52.316352 | orchestrator | ok: [testbed-manager] 2026-02-05 00:43:52.316372 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:43:52.316386 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:43:52.316400 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:43:52.316414 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:43:52.316429 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:52.316442 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:52.316518 | orchestrator | 2026-02-05 00:43:52.316533 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 00:43:52.316547 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:01.113) 0:00:01.371 ***** 2026-02-05 00:43:52.316563 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:43:52.316578 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:43:52.316592 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:43:52.316606 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:43:52.316620 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.316634 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:52.316647 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:52.316660 | orchestrator | 2026-02-05 00:43:52.316673 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 00:43:52.316687 | orchestrator | 2026-02-05 00:43:52.316700 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-05 00:43:52.316713 | orchestrator | Thursday 05 February 2026 00:43:46 +0000 (0:00:01.252) 0:00:02.623 ***** 2026-02-05 00:43:52.316726 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:43:52.316739 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:43:52.316752 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:43:52.316765 | orchestrator | ok: [testbed-manager] 2026-02-05 00:43:52.316778 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:52.316792 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:43:52.316804 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:43:52.316817 | orchestrator | 2026-02-05 00:43:52.316829 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 00:43:52.316843 | orchestrator | 2026-02-05 00:43:52.316856 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 00:43:52.316869 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:04.858) 0:00:07.482 ***** 2026-02-05 00:43:52.316882 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:43:52.316896 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:43:52.316910 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:43:52.316923 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:43:52.316936 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.316948 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:52.316961 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:43:52.316974 | orchestrator | 2026-02-05 00:43:52.316988 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:43:52.317001 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:43:52.317016 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-05 00:43:52.317029 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:43:52.317044 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:43:52.317057 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:43:52.317071 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:43:52.317083 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:43:52.317111 | orchestrator | 2026-02-05 00:43:52.317125 | orchestrator | 2026-02-05 00:43:52.317139 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:43:52.317152 | orchestrator | Thursday 05 February 2026 00:43:52 +0000 (0:00:00.430) 0:00:07.912 ***** 2026-02-05 00:43:52.317165 | orchestrator | =============================================================================== 2026-02-05 00:43:52.317178 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2026-02-05 00:43:52.317191 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2026-02-05 00:43:52.317258 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2026-02-05 00:43:52.317272 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-02-05 00:44:04.299110 | orchestrator | 2026-02-05 00:44:04 | INFO  | Task dbb2bf2a-b392-4f68-96b9-bb31b54fa61a (frr) was prepared for execution. 2026-02-05 00:44:04.299204 | orchestrator | 2026-02-05 00:44:04 | INFO  | It takes a moment until task dbb2bf2a-b392-4f68-96b9-bb31b54fa61a (frr) has been started and output is visible here. 
2026-02-05 00:44:26.880645 | orchestrator | 2026-02-05 00:44:26.880763 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-05 00:44:26.880781 | orchestrator | 2026-02-05 00:44:26.880794 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-05 00:44:26.880827 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.207) 0:00:00.207 ***** 2026-02-05 00:44:26.880840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:44:26.880854 | orchestrator | 2026-02-05 00:44:26.880866 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-05 00:44:26.880878 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.203) 0:00:00.411 ***** 2026-02-05 00:44:26.880891 | orchestrator | changed: [testbed-manager] 2026-02-05 00:44:26.880904 | orchestrator | 2026-02-05 00:44:26.880917 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-05 00:44:26.880930 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:01.039) 0:00:01.450 ***** 2026-02-05 00:44:26.880948 | orchestrator | changed: [testbed-manager] 2026-02-05 00:44:26.880960 | orchestrator | 2026-02-05 00:44:26.880973 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-05 00:44:26.880986 | orchestrator | Thursday 05 February 2026 00:44:17 +0000 (0:00:08.390) 0:00:09.841 ***** 2026-02-05 00:44:26.880999 | orchestrator | ok: [testbed-manager] 2026-02-05 00:44:26.881013 | orchestrator | 2026-02-05 00:44:26.881025 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-05 00:44:26.881038 | orchestrator | Thursday 05 February 2026 00:44:18 +0000 (0:00:00.996) 0:00:10.837 ***** 2026-02-05 
00:44:26.881050 | orchestrator | changed: [testbed-manager] 2026-02-05 00:44:26.881063 | orchestrator | 2026-02-05 00:44:26.881075 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-05 00:44:26.881087 | orchestrator | Thursday 05 February 2026 00:44:19 +0000 (0:00:00.869) 0:00:11.707 ***** 2026-02-05 00:44:26.881100 | orchestrator | ok: [testbed-manager] 2026-02-05 00:44:26.881114 | orchestrator | 2026-02-05 00:44:26.881128 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-05 00:44:26.881143 | orchestrator | Thursday 05 February 2026 00:44:20 +0000 (0:00:01.050) 0:00:12.757 ***** 2026-02-05 00:44:26.881156 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:44:26.881170 | orchestrator | 2026-02-05 00:44:26.881183 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-05 00:44:26.881197 | orchestrator | Thursday 05 February 2026 00:44:20 +0000 (0:00:00.139) 0:00:12.896 ***** 2026-02-05 00:44:26.881211 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:44:26.881250 | orchestrator | 2026-02-05 00:44:26.881264 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-05 00:44:26.881277 | orchestrator | Thursday 05 February 2026 00:44:20 +0000 (0:00:00.141) 0:00:13.038 ***** 2026-02-05 00:44:26.881290 | orchestrator | changed: [testbed-manager] 2026-02-05 00:44:26.881304 | orchestrator | 2026-02-05 00:44:26.881317 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-05 00:44:26.881330 | orchestrator | Thursday 05 February 2026 00:44:21 +0000 (0:00:00.886) 0:00:13.925 ***** 2026-02-05 00:44:26.881343 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-05 00:44:26.881356 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-05 00:44:26.881371 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-05 00:44:26.881385 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-05 00:44:26.881397 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-05 00:44:26.881411 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-05 00:44:26.881424 | orchestrator | 2026-02-05 00:44:26.881437 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-05 00:44:26.881485 | orchestrator | Thursday 05 February 2026 00:44:23 +0000 (0:00:02.034) 0:00:15.959 ***** 2026-02-05 00:44:26.881498 | orchestrator | ok: [testbed-manager] 2026-02-05 00:44:26.881511 | orchestrator | 2026-02-05 00:44:26.881523 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-05 00:44:26.881536 | orchestrator | Thursday 05 February 2026 00:44:25 +0000 (0:00:01.340) 0:00:17.300 ***** 2026-02-05 00:44:26.881548 | orchestrator | changed: [testbed-manager] 2026-02-05 00:44:26.881560 | orchestrator | 2026-02-05 00:44:26.881572 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:44:26.881585 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:44:26.881596 | orchestrator | 2026-02-05 00:44:26.881608 | orchestrator | 2026-02-05 00:44:26.881621 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:44:26.881633 | orchestrator | Thursday 05 February 2026 00:44:26 +0000 (0:00:01.399) 0:00:18.699 ***** 2026-02-05 00:44:26.881645 | 
orchestrator | =============================================================================== 2026-02-05 00:44:26.881657 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.39s 2026-02-05 00:44:26.881669 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.03s 2026-02-05 00:44:26.881680 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s 2026-02-05 00:44:26.881692 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.34s 2026-02-05 00:44:26.881704 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.05s 2026-02-05 00:44:26.881737 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.04s 2026-02-05 00:44:26.881750 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s 2026-02-05 00:44:26.881763 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.89s 2026-02-05 00:44:26.881775 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.87s 2026-02-05 00:44:26.881787 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-02-05 00:44:26.881798 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-02-05 00:44:26.881810 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-02-05 00:44:27.239435 | orchestrator | 2026-02-05 00:44:27.242292 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Feb 5 00:44:27 UTC 2026 2026-02-05 00:44:27.242335 | orchestrator | 2026-02-05 00:44:29.139818 | orchestrator | 2026-02-05 00:44:29 | INFO  | Collection nutshell is prepared for execution 2026-02-05 00:44:29.139921 | orchestrator | 2026-02-05 00:44:29 | INFO  | A [0] - 
dotfiles 2026-02-05 00:44:39.149536 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - homer 2026-02-05 00:44:39.149644 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - netdata 2026-02-05 00:44:39.149661 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - openstackclient 2026-02-05 00:44:39.149673 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - phpmyadmin 2026-02-05 00:44:39.149704 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - common 2026-02-05 00:44:39.153622 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- loadbalancer 2026-02-05 00:44:39.153819 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [2] --- opensearch 2026-02-05 00:44:39.154279 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [2] --- mariadb-ng 2026-02-05 00:44:39.154613 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [3] ---- horizon 2026-02-05 00:44:39.155206 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [3] ---- keystone 2026-02-05 00:44:39.155433 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- neutron 2026-02-05 00:44:39.155616 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [5] ------ wait-for-nova 2026-02-05 00:44:39.156046 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [6] ------- octavia 2026-02-05 00:44:39.158262 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- barbican 2026-02-05 00:44:39.158706 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- designate 2026-02-05 00:44:39.158757 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- ironic 2026-02-05 00:44:39.158965 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- placement 2026-02-05 00:44:39.159213 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- magnum 2026-02-05 00:44:39.160051 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- openvswitch 2026-02-05 00:44:39.160224 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [2] --- ovn 2026-02-05 00:44:39.160628 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- memcached 2026-02-05 
00:44:39.160932 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- redis 2026-02-05 00:44:39.161411 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- rabbitmq-ng 2026-02-05 00:44:39.161538 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - kubernetes 2026-02-05 00:44:39.164476 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- kubeconfig 2026-02-05 00:44:39.164526 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- copy-kubeconfig 2026-02-05 00:44:39.164770 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [0] - ceph 2026-02-05 00:44:39.167158 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [1] -- ceph-pools 2026-02-05 00:44:39.167184 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [2] --- copy-ceph-keys 2026-02-05 00:44:39.167776 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [3] ---- cephclient 2026-02-05 00:44:39.167804 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-05 00:44:39.167815 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- wait-for-keystone 2026-02-05 00:44:39.168105 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-05 00:44:39.168126 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [5] ------ glance 2026-02-05 00:44:39.168366 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [5] ------ cinder 2026-02-05 00:44:39.168686 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [5] ------ nova 2026-02-05 00:44:39.168910 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [4] ----- prometheus 2026-02-05 00:44:39.169133 | orchestrator | 2026-02-05 00:44:39 | INFO  | A [5] ------ grafana 2026-02-05 00:44:39.385914 | orchestrator | 2026-02-05 00:44:39 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-05 00:44:39.385990 | orchestrator | 2026-02-05 00:44:39 | INFO  | Tasks are running in the background 2026-02-05 00:44:42.554975 | orchestrator | 2026-02-05 00:44:42 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-02-05 00:44:44.663116 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task c32e30e0-2c3c-4294-ac58-b2c48f7a57e7 is in state STARTED 2026-02-05 00:44:44.664085 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:44:44.665926 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:44:44.667147 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:44:44.667581 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:44:44.669538 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task 6ab76478-d251-4869-9369-e87131b7df4d is in state STARTED 2026-02-05 00:44:44.671570 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state STARTED 2026-02-05 00:44:44.672378 | orchestrator | 2026-02-05 00:44:44 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:44:44.672406 | orchestrator | 2026-02-05 00:44:44 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:44:47.749871 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task c32e30e0-2c3c-4294-ac58-b2c48f7a57e7 is in state STARTED 2026-02-05 00:44:47.749957 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:44:47.749964 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:44:47.749969 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:44:47.749974 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:44:47.749979 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task 
6ab76478-d251-4869-9369-e87131b7df4d is in state STARTED 2026-02-05 00:44:47.749984 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state STARTED 2026-02-05 00:44:47.749989 | orchestrator | 2026-02-05 00:44:47 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:44:47.749993 | orchestrator | 2026-02-05 00:44:47 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:44:50.772081 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task c32e30e0-2c3c-4294-ac58-b2c48f7a57e7 is in state STARTED 2026-02-05 00:44:50.780932 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:44:50.788851 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:44:50.790873 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:44:50.793296 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:44:50.793361 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task 6ab76478-d251-4869-9369-e87131b7df4d is in state SUCCESS 2026-02-05 00:44:50.793851 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state STARTED 2026-02-05 00:44:50.794651 | orchestrator | 2026-02-05 00:44:50 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:44:50.794678 | orchestrator | 2026-02-05 00:44:50 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:44:53.864787 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task c32e30e0-2c3c-4294-ac58-b2c48f7a57e7 is in state STARTED 2026-02-05 00:44:53.864908 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:44:53.864934 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task 
b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:44:53.864955 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:44:53.864974 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:44:53.864991 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state STARTED 2026-02-05 00:44:53.865003 | orchestrator | 2026-02-05 00:44:53 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:44:53.865014 | orchestrator | 2026-02-05 00:44:53 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:45:06.092164 | orchestrator | 2026-02-05 00:45:06.092238 | orchestrator 
| PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:45:06.092245 | orchestrator | 2026-02-05 00:45:06.092250 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:45:06.092256 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.240) 0:00:00.240 ***** 2026-02-05 00:45:06.092261 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:45:06.092267 | orchestrator | 2026-02-05 00:45:06.092272 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:45:06.092277 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.105) 0:00:00.346 ***** 2026-02-05 00:45:06.092283 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-05 00:45:06.092288 | orchestrator | 2026-02-05 00:45:06.092292 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-05 00:45:06.092297 | orchestrator | 2026-02-05 00:45:06.092302 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 00:45:06.092306 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.126) 0:00:00.472 ***** 2026-02-05 00:45:06.092311 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0 2026-02-05 00:45:06.092315 | orchestrator | 2026-02-05 00:45:06.092320 | orchestrator | TASK [service-images-pull : opensearch | Pull images] ************************** 2026-02-05 00:45:06.092325 | orchestrator | Thursday 05 February 2026 00:42:38 +0000 (0:00:00.178) 0:00:00.650 ***** 2026-02-05 00:45:06.092329 | orchestrator | changed: [testbed-node-0] => (item=opensearch) 2026-02-05 00:45:06.092335 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards) 2026-02-05 00:45:06.092340 | orchestrator | 2026-02-05 00:45:06.092345 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-05 00:45:06.092349 | orchestrator | testbed-node-0 : ok=4  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092356 | orchestrator | 2026-02-05 00:45:06.092361 | orchestrator | 2026-02-05 00:45:06.092365 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:45:06.092370 | orchestrator | Thursday 05 February 2026 00:44:48 +0000 (0:02:09.650) 0:02:10.301 ***** 2026-02-05 00:45:06.092374 | orchestrator | =============================================================================== 2026-02-05 00:45:06.092379 | orchestrator | service-images-pull : opensearch | Pull images ------------------------ 129.65s 2026-02-05 00:45:06.092384 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.18s 2026-02-05 00:45:06.092388 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.13s 2026-02-05 00:45:06.092393 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.11s 2026-02-05 00:45:06.092416 | orchestrator | 2026-02-05 00:45:06.092421 | orchestrator | 2026-02-05 00:45:06.092425 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-02-05 00:45:06.092430 | orchestrator | 2026-02-05 00:45:06.092456 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-02-05 00:45:06.092461 | orchestrator | Thursday 05 February 2026 00:44:51 +0000 (0:00:00.649) 0:00:00.649 ***** 2026-02-05 00:45:06.092466 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:45:06.092471 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:45:06.092476 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:45:06.092480 | orchestrator | changed: [testbed-manager] 2026-02-05 00:45:06.092485 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:45:06.092490 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:45:06.092494 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:45:06.092499 | orchestrator | 2026-02-05 00:45:06.092503 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-02-05 00:45:06.092508 | orchestrator | Thursday 05 February 2026 00:44:55 +0000 (0:00:03.679) 0:00:04.329 ***** 2026-02-05 00:45:06.092513 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-02-05 00:45:06.092518 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-02-05 00:45:06.092522 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-02-05 00:45:06.092527 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-02-05 00:45:06.092531 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-02-05 00:45:06.092536 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-02-05 00:45:06.092541 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-02-05 00:45:06.092545 | orchestrator | 2026-02-05 00:45:06.092550 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-02-05 00:45:06.092555 | orchestrator | Thursday 05 February 2026 00:44:57 +0000 (0:00:01.625) 0:00:05.955 ***** 2026-02-05 00:45:06.092562 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:56.551561', 'end': '2026-02-05 00:44:56.555727', 'delta': '0:00:00.004166', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092586 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:56.655456', 'end': '2026-02-05 00:44:56.667242', 'delta': '0:00:00.011786', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092779 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:56.699369', 'end': '2026-02-05 00:44:56.707978', 'delta': '0:00:00.008609', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092796 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:56.748552', 'end': '2026-02-05 00:44:56.756904', 'delta': '0:00:00.008352', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092805 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:56.837468', 'end': '2026-02-05 00:44:56.843422', 'delta': '0:00:00.005954', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092811 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:56.919631', 'end': '2026-02-05 00:44:56.930169', 'delta': '0:00:00.010538', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092822 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:44:57.045087', 'end': '2026-02-05 00:44:57.051712', 'delta': '0:00:00.006625', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-05 00:45:06.092828 | orchestrator | 2026-02-05 00:45:06.092834 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-02-05 00:45:06.092840 | orchestrator | Thursday 05 February 2026 00:44:59 +0000 (0:00:01.982) 0:00:07.938 ***** 2026-02-05 00:45:06.092849 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-02-05 00:45:06.092855 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-02-05 00:45:06.092861 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-02-05 00:45:06.092866 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-02-05 00:45:06.092872 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-02-05 00:45:06.092878 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-02-05 00:45:06.092883 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-02-05 00:45:06.092889 | orchestrator | 2026-02-05 00:45:06.092894 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-02-05 00:45:06.092900 | orchestrator | Thursday 05 February 2026 00:45:01 +0000 (0:00:02.017) 0:00:09.955 ***** 2026-02-05 00:45:06.092906 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-02-05 00:45:06.092911 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-02-05 00:45:06.092917 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-02-05 00:45:06.092922 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-02-05 00:45:06.092928 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-02-05 00:45:06.092933 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-02-05 00:45:06.092939 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-02-05 00:45:06.092944 | orchestrator | 2026-02-05 00:45:06.092949 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:45:06.092955 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092961 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092967 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092973 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092981 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092987 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092992 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:45:06.092998 | orchestrator | 2026-02-05 00:45:06.093004 | orchestrator | 2026-02-05 00:45:06.093009 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-02-05 00:45:06.093014 | orchestrator | Thursday 05 February 2026 00:45:03 +0000 (0:00:02.712) 0:00:12.667 ***** 2026-02-05 00:45:06.093020 | orchestrator | =============================================================================== 2026-02-05 00:45:06.093025 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.68s 2026-02-05 00:45:06.093031 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.71s 2026-02-05 00:45:06.093036 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.02s 2026-02-05 00:45:06.093041 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.98s 2026-02-05 00:45:06.093047 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.63s 2026-02-05 00:45:06.093053 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task c32e30e0-2c3c-4294-ac58-b2c48f7a57e7 is in state SUCCESS 2026-02-05 00:45:06.095629 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:45:06.096486 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:45:06.101605 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:45:06.105971 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:45:06.107108 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:45:06.107134 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state STARTED 2026-02-05 00:45:06.107480 | orchestrator | 2026-02-05 00:45:06 | INFO  | Task 
0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:45:06.107496 | orchestrator | 2026-02-05 00:45:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:45:09.197167 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:45:09.198814 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:45:09.201612 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:45:09.209633 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:45:09.209686 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:45:09.209693 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state STARTED 2026-02-05 00:45:09.209897 | orchestrator | 2026-02-05 00:45:09 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:45:09.209919 | orchestrator | 2026-02-05 00:45:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:45:30.597665 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:45:30.597811 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:45:30.597946 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task 
a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:45:30.597968 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:45:30.629673 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:45:30.629769 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task 3352d630-dbc2-484c-9a94-50ad9f39e540 is in state SUCCESS 2026-02-05 00:45:30.629783 | orchestrator | 2026-02-05 00:45:30 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:45:30.629796 | orchestrator | 2026-02-05 00:45:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:45:33.640876 | orchestrator | 2026-02-05 00:45:33 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:45:33.641273 | orchestrator | 2026-02-05 00:45:33 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:45:33.643461 | orchestrator | 2026-02-05 00:45:33 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:45:33.644269 | orchestrator | 2026-02-05 00:45:33 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state STARTED 2026-02-05 00:45:33.644986 | orchestrator | 2026-02-05 00:45:33 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:45:33.648792 | orchestrator | 2026-02-05 00:45:33 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:45:33.648968 | orchestrator | 2026-02-05 00:45:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:45:36.686600 | orchestrator | 2026-02-05 00:45:36 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:45:36.686708 | orchestrator | 2026-02-05 00:45:36 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:45:36.689031 | orchestrator | 2026-02-05 00:45:36 | INFO  | Task 
a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:45:36.689309 | orchestrator | 2026-02-05 00:45:36 | INFO  | Task 92a580ed-b381-441f-b629-47fff93822ef is in state SUCCESS 2026-02-05 00:45:36.690202 | orchestrator | 2026-02-05 00:45:36 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:45:36.690770 | orchestrator | 2026-02-05 00:45:36 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:45:36.691143 | orchestrator | 2026-02-05 00:45:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:45:39.785912 | orchestrator | 2026-02-05 00:45:39 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:45:39.786702 | orchestrator | 2026-02-05 00:45:39 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:45:39.788823 | orchestrator | 2026-02-05 00:45:39 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:45:39.790930 | orchestrator | 2026-02-05 00:45:39 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:45:39.791801 | orchestrator | 2026-02-05 00:45:39 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:45:39.793280 | orchestrator | 2026-02-05 00:45:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:46:07.483328 | orchestrator | 2026-02-05 00:46:07 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state STARTED 2026-02-05 00:46:07.485418 | orchestrator | 2026-02-05 00:46:07 | INFO  | Task 
b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED 2026-02-05 00:46:07.486143 | orchestrator | 2026-02-05 00:46:07 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:46:07.486778 | orchestrator | 2026-02-05 00:46:07 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED 2026-02-05 00:46:07.488215 | orchestrator | 2026-02-05 00:46:07 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:46:07.488232 | orchestrator | 2026-02-05 00:46:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:46:10.527532 | orchestrator | 2026-02-05 00:46:10 | INFO  | Task c29dd86b-d06e-4dec-8ae4-60e8e5d2bc69 is in state SUCCESS 2026-02-05 00:46:10.528881 | orchestrator | 2026-02-05 00:46:10.528924 | orchestrator | 2026-02-05 00:46:10.528932 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-02-05 00:46:10.528941 | orchestrator | 2026-02-05 00:46:10.528948 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-02-05 00:46:10.528955 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 (0:00:00.762) 0:00:00.762 ***** 2026-02-05 00:46:10.528961 | orchestrator | ok: [testbed-manager] => { 2026-02-05 00:46:10.528971 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-02-05 00:46:10.528979 | orchestrator | } 2026-02-05 00:46:10.528986 | orchestrator | 2026-02-05 00:46:10.528993 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-02-05 00:46:10.529000 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 (0:00:00.343) 0:00:01.105 ***** 2026-02-05 00:46:10.529007 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529014 | orchestrator | 2026-02-05 00:46:10.529020 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-02-05 00:46:10.529026 | orchestrator | Thursday 05 February 2026 00:44:54 +0000 (0:00:01.144) 0:00:02.249 ***** 2026-02-05 00:46:10.529033 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-02-05 00:46:10.529040 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-02-05 00:46:10.529048 | orchestrator | 2026-02-05 00:46:10.529055 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-02-05 00:46:10.529063 | orchestrator | Thursday 05 February 2026 00:44:56 +0000 (0:00:01.946) 0:00:04.195 ***** 2026-02-05 00:46:10.529069 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529076 | orchestrator | 2026-02-05 00:46:10.529082 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-02-05 00:46:10.529088 | orchestrator | Thursday 05 February 2026 00:44:59 +0000 (0:00:02.450) 0:00:06.645 ***** 2026-02-05 00:46:10.529095 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529102 | orchestrator | 2026-02-05 00:46:10.529108 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-02-05 00:46:10.529114 | orchestrator | Thursday 05 February 2026 00:45:00 +0000 (0:00:01.165) 0:00:07.811 ***** 2026-02-05 00:46:10.529121 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-02-05 00:46:10.529127 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529134 | orchestrator | 2026-02-05 00:46:10.529140 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-02-05 00:46:10.529146 | orchestrator | Thursday 05 February 2026 00:45:26 +0000 (0:00:25.992) 0:00:33.803 ***** 2026-02-05 00:46:10.529152 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529159 | orchestrator | 2026-02-05 00:46:10.529165 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:46:10.529172 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:46:10.529180 | orchestrator | 2026-02-05 00:46:10.529184 | orchestrator | 2026-02-05 00:46:10.529189 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:46:10.529212 | orchestrator | Thursday 05 February 2026 00:45:28 +0000 (0:00:02.141) 0:00:35.945 ***** 2026-02-05 00:46:10.529216 | orchestrator | =============================================================================== 2026-02-05 00:46:10.529220 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.99s 2026-02-05 00:46:10.529224 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.45s 2026-02-05 00:46:10.529228 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.14s 2026-02-05 00:46:10.529232 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.95s 2026-02-05 00:46:10.529236 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.17s 2026-02-05 00:46:10.529240 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.14s 2026-02-05 00:46:10.529244 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.34s 2026-02-05 00:46:10.529248 | orchestrator | 2026-02-05 00:46:10.529252 | orchestrator | 2026-02-05 00:46:10.529255 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-05 00:46:10.529259 | orchestrator | 2026-02-05 00:46:10.529263 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-05 00:46:10.529267 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 (0:00:00.573) 0:00:00.573 ***** 2026-02-05 00:46:10.529271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-05 00:46:10.529277 | orchestrator | 2026-02-05 00:46:10.529281 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-05 00:46:10.529285 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 (0:00:00.245) 0:00:00.819 ***** 2026-02-05 00:46:10.529289 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-05 00:46:10.529293 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-05 00:46:10.529297 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-05 00:46:10.529301 | orchestrator | 2026-02-05 00:46:10.529305 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-05 00:46:10.529308 | orchestrator | Thursday 05 February 2026 00:44:54 +0000 (0:00:01.412) 0:00:02.231 ***** 2026-02-05 00:46:10.529312 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529317 | orchestrator | 2026-02-05 00:46:10.529323 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-05 00:46:10.529329 | orchestrator | Thursday 05 February 2026 00:44:57 +0000 (0:00:02.362) 
0:00:04.593 ***** 2026-02-05 00:46:10.529346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-05 00:46:10.529353 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529359 | orchestrator | 2026-02-05 00:46:10.529365 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-05 00:46:10.529371 | orchestrator | Thursday 05 February 2026 00:45:30 +0000 (0:00:33.314) 0:00:37.908 ***** 2026-02-05 00:46:10.529377 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529383 | orchestrator | 2026-02-05 00:46:10.529390 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-05 00:46:10.529396 | orchestrator | Thursday 05 February 2026 00:45:31 +0000 (0:00:00.959) 0:00:38.867 ***** 2026-02-05 00:46:10.529403 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529410 | orchestrator | 2026-02-05 00:46:10.529417 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-05 00:46:10.529444 | orchestrator | Thursday 05 February 2026 00:45:32 +0000 (0:00:01.072) 0:00:39.940 ***** 2026-02-05 00:46:10.529448 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529452 | orchestrator | 2026-02-05 00:46:10.529456 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-05 00:46:10.529460 | orchestrator | Thursday 05 February 2026 00:45:34 +0000 (0:00:01.904) 0:00:41.844 ***** 2026-02-05 00:46:10.529470 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529475 | orchestrator | 2026-02-05 00:46:10.529479 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-05 00:46:10.529484 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.784) 0:00:42.629 ***** 2026-02-05 00:46:10.529488 | orchestrator | changed: 
[testbed-manager] 2026-02-05 00:46:10.529493 | orchestrator | 2026-02-05 00:46:10.529497 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-05 00:46:10.529502 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.562) 0:00:43.192 ***** 2026-02-05 00:46:10.529506 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529511 | orchestrator | 2026-02-05 00:46:10.529515 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:46:10.529520 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:46:10.529525 | orchestrator | 2026-02-05 00:46:10.529529 | orchestrator | 2026-02-05 00:46:10.529534 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:46:10.529539 | orchestrator | Thursday 05 February 2026 00:45:36 +0000 (0:00:00.451) 0:00:43.643 ***** 2026-02-05 00:46:10.529543 | orchestrator | =============================================================================== 2026-02-05 00:46:10.529548 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.31s 2026-02-05 00:46:10.529553 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.36s 2026-02-05 00:46:10.529557 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.90s 2026-02-05 00:46:10.529561 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.41s 2026-02-05 00:46:10.529569 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.07s 2026-02-05 00:46:10.529574 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s 2026-02-05 00:46:10.529578 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 
2026-02-05 00:46:10.529583 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.56s 2026-02-05 00:46:10.529588 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s 2026-02-05 00:46:10.529592 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.25s 2026-02-05 00:46:10.529597 | orchestrator | 2026-02-05 00:46:10.529601 | orchestrator | 2026-02-05 00:46:10.529606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:46:10.529610 | orchestrator | 2026-02-05 00:46:10.529615 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:46:10.529619 | orchestrator | Thursday 05 February 2026 00:44:51 +0000 (0:00:00.897) 0:00:00.897 ***** 2026-02-05 00:46:10.529624 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-05 00:46:10.529629 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-05 00:46:10.529633 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-05 00:46:10.529638 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-05 00:46:10.529642 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-05 00:46:10.529646 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-05 00:46:10.529651 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-05 00:46:10.529655 | orchestrator | 2026-02-05 00:46:10.529660 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-05 00:46:10.529664 | orchestrator | 2026-02-05 00:46:10.529668 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-05 00:46:10.529673 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 
(0:00:01.975) 0:00:02.872 ***** 2026-02-05 00:46:10.529685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:46:10.529697 | orchestrator | 2026-02-05 00:46:10.529703 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-05 00:46:10.529710 | orchestrator | Thursday 05 February 2026 00:44:55 +0000 (0:00:01.620) 0:00:04.493 ***** 2026-02-05 00:46:10.529716 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:46:10.529723 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529729 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:46:10.529736 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:46:10.529742 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:46:10.529754 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:46:10.529762 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:46:10.529769 | orchestrator | 2026-02-05 00:46:10.529775 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-05 00:46:10.529782 | orchestrator | Thursday 05 February 2026 00:44:57 +0000 (0:00:02.175) 0:00:06.668 ***** 2026-02-05 00:46:10.529786 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.529791 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:46:10.529796 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:46:10.529800 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:46:10.529804 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:46:10.529807 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:46:10.529811 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:46:10.529815 | orchestrator | 2026-02-05 00:46:10.529819 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-05 00:46:10.529823 | 
orchestrator | Thursday 05 February 2026 00:45:00 +0000 (0:00:03.314) 0:00:09.982 ***** 2026-02-05 00:46:10.529827 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:10.529831 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:46:10.529836 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:10.529842 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:10.529848 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:10.529854 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:46:10.529860 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529866 | orchestrator | 2026-02-05 00:46:10.529873 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-05 00:46:10.529879 | orchestrator | Thursday 05 February 2026 00:45:04 +0000 (0:00:03.483) 0:00:13.466 ***** 2026-02-05 00:46:10.529885 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:10.529891 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:10.529899 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:10.529903 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529907 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:10.529911 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:46:10.529914 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:46:10.529918 | orchestrator | 2026-02-05 00:46:10.529922 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-05 00:46:10.529926 | orchestrator | Thursday 05 February 2026 00:45:15 +0000 (0:00:11.335) 0:00:24.801 ***** 2026-02-05 00:46:10.529930 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:10.529934 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:10.529938 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:10.529941 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:10.529945 | orchestrator | changed: [testbed-node-2] 
2026-02-05 00:46:10.529949 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.529953 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:46:10.529957 | orchestrator | 2026-02-05 00:46:10.529961 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-05 00:46:10.529965 | orchestrator | Thursday 05 February 2026 00:45:48 +0000 (0:00:32.487) 0:00:57.289 ***** 2026-02-05 00:46:10.529969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:46:10.529980 | orchestrator | 2026-02-05 00:46:10.529986 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-05 00:46:10.529990 | orchestrator | Thursday 05 February 2026 00:45:49 +0000 (0:00:01.743) 0:00:59.032 ***** 2026-02-05 00:46:10.529994 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-05 00:46:10.529999 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-05 00:46:10.530003 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-05 00:46:10.530006 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-05 00:46:10.530010 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-05 00:46:10.530153 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-05 00:46:10.530161 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-05 00:46:10.530165 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-05 00:46:10.530169 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-05 00:46:10.530175 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-05 00:46:10.530182 | orchestrator | changed: [testbed-node-0] => 
(item=stream.conf) 2026-02-05 00:46:10.530189 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-05 00:46:10.530196 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-05 00:46:10.530202 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-05 00:46:10.530209 | orchestrator | 2026-02-05 00:46:10.530216 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-05 00:46:10.530224 | orchestrator | Thursday 05 February 2026 00:45:55 +0000 (0:00:05.938) 0:01:04.971 ***** 2026-02-05 00:46:10.530230 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.530238 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:46:10.530244 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:46:10.530251 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:46:10.530258 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:46:10.530265 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:46:10.530271 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:46:10.530278 | orchestrator | 2026-02-05 00:46:10.530284 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-05 00:46:10.530291 | orchestrator | Thursday 05 February 2026 00:45:57 +0000 (0:00:01.603) 0:01:06.574 ***** 2026-02-05 00:46:10.530299 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.530306 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:10.530313 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:10.530319 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:46:10.530326 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:10.530333 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:10.530339 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:46:10.530345 | orchestrator | 2026-02-05 00:46:10.530351 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2026-02-05 00:46:10.530366 | orchestrator | Thursday 05 February 2026 00:45:58 +0000 (0:00:01.174) 0:01:07.748 ***** 2026-02-05 00:46:10.530372 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:46:10.530378 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:46:10.530384 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:46:10.530390 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.530396 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:46:10.530401 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:46:10.530407 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:46:10.530414 | orchestrator | 2026-02-05 00:46:10.530443 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-05 00:46:10.530450 | orchestrator | Thursday 05 February 2026 00:46:00 +0000 (0:00:01.555) 0:01:09.304 ***** 2026-02-05 00:46:10.530457 | orchestrator | ok: [testbed-manager] 2026-02-05 00:46:10.530463 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:46:10.530469 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:46:10.530487 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:46:10.530493 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:46:10.530500 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:46:10.530506 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:46:10.530512 | orchestrator | 2026-02-05 00:46:10.530519 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-05 00:46:10.530525 | orchestrator | Thursday 05 February 2026 00:46:02 +0000 (0:00:02.044) 0:01:11.348 ***** 2026-02-05 00:46:10.530531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-05 00:46:10.530540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:46:10.530547 | orchestrator | 2026-02-05 00:46:10.530553 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-05 00:46:10.530559 | orchestrator | Thursday 05 February 2026 00:46:03 +0000 (0:00:01.610) 0:01:12.959 ***** 2026-02-05 00:46:10.530565 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.530571 | orchestrator | 2026-02-05 00:46:10.530577 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-05 00:46:10.530583 | orchestrator | Thursday 05 February 2026 00:46:05 +0000 (0:00:01.940) 0:01:14.900 ***** 2026-02-05 00:46:10.530589 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:10.530595 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:10.530601 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:10.530608 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:10.530614 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:46:10.530620 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:46:10.530626 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:10.530633 | orchestrator | 2026-02-05 00:46:10.530641 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:46:10.530645 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:46:10.530649 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:46:10.530658 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:46:10.530663 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:46:10.530667 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0
2026-02-05 00:46:10.530670 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:46:10.530674 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:46:10.530678 | orchestrator |
2026-02-05 00:46:10.530682 | orchestrator |
2026-02-05 00:46:10.530686 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:46:10.530690 | orchestrator | Thursday 05 February 2026 00:46:08 +0000 (0:00:02.997) 0:01:17.897 *****
2026-02-05 00:46:10.530694 | orchestrator | ===============================================================================
2026-02-05 00:46:10.530698 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 32.49s
2026-02-05 00:46:10.530702 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.34s
2026-02-05 00:46:10.530706 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.94s
2026-02-05 00:46:10.530714 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.48s
2026-02-05 00:46:10.530718 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.31s
2026-02-05 00:46:10.530722 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.00s
2026-02-05 00:46:10.530726 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.18s
2026-02-05 00:46:10.530729 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.04s
2026-02-05 00:46:10.530733 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.98s
2026-02-05 00:46:10.530737 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.94s
2026-02-05 00:46:10.530741 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.74s
2026-02-05 00:46:10.530750 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.62s
2026-02-05 00:46:10.530754 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.61s
2026-02-05 00:46:10.530758 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.60s
2026-02-05 00:46:10.530762 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.56s
2026-02-05 00:46:10.530766 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.17s
2026-02-05 00:46:10.530770 | orchestrator | 2026-02-05 00:46:10 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:10.530774 | orchestrator | 2026-02-05 00:46:10 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:10.532492 | orchestrator | 2026-02-05 00:46:10 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:10.532529 | orchestrator | 2026-02-05 00:46:10 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:10.532539 | orchestrator | 2026-02-05 00:46:10 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:13.561834 | orchestrator | 2026-02-05 00:46:13 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:13.562279 | orchestrator | 2026-02-05 00:46:13 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:13.562329 | orchestrator | 2026-02-05 00:46:13 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:13.562954 | orchestrator | 2026-02-05 00:46:13 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:13.563035 | orchestrator | 2026-02-05 00:46:13 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:16.602187 | orchestrator | 2026-02-05 00:46:16 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:16.603706 | orchestrator | 2026-02-05 00:46:16 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:16.604657 | orchestrator | 2026-02-05 00:46:16 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:16.605106 | orchestrator | 2026-02-05 00:46:16 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:16.606243 | orchestrator | 2026-02-05 00:46:16 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:19.638991 | orchestrator | 2026-02-05 00:46:19 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:19.640555 | orchestrator | 2026-02-05 00:46:19 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:19.641768 | orchestrator | 2026-02-05 00:46:19 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:19.643967 | orchestrator | 2026-02-05 00:46:19 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:19.645609 | orchestrator | 2026-02-05 00:46:19 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:22.687182 | orchestrator | 2026-02-05 00:46:22 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:22.688476 | orchestrator | 2026-02-05 00:46:22 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:22.689694 | orchestrator | 2026-02-05 00:46:22 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:22.691053 | orchestrator | 2026-02-05 00:46:22 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:22.691456 | orchestrator | 2026-02-05 00:46:22 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:25.734175 | orchestrator | 2026-02-05 00:46:25 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:25.734758 | orchestrator | 2026-02-05 00:46:25 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:25.735392 | orchestrator | 2026-02-05 00:46:25 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:25.737583 | orchestrator | 2026-02-05 00:46:25 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:25.737819 | orchestrator | 2026-02-05 00:46:25 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:28.779410 | orchestrator | 2026-02-05 00:46:28 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:28.779741 | orchestrator | 2026-02-05 00:46:28 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:28.781591 | orchestrator | 2026-02-05 00:46:28 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:28.781671 | orchestrator | 2026-02-05 00:46:28 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:28.781679 | orchestrator | 2026-02-05 00:46:28 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:31.829624 | orchestrator | 2026-02-05 00:46:31 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:31.830213 | orchestrator | 2026-02-05 00:46:31 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:31.831494 | orchestrator | 2026-02-05 00:46:31 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state STARTED
2026-02-05 00:46:31.832521 | orchestrator | 2026-02-05 00:46:31 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:31.832552 | orchestrator | 2026-02-05 00:46:31 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:34.879091 | orchestrator | 2026-02-05 00:46:34 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:34.882560 | orchestrator | 2026-02-05 00:46:34 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:34.883919 | orchestrator | 2026-02-05 00:46:34 | INFO  | Task 8897077e-aa92-4337-bfe8-86d031f80b02 is in state SUCCESS
2026-02-05 00:46:34.886324 | orchestrator | 2026-02-05 00:46:34 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:34.886352 | orchestrator | 2026-02-05 00:46:34 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:37.926822 | orchestrator | 2026-02-05 00:46:37 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:37.927432 | orchestrator | 2026-02-05 00:46:37 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:37.929269 | orchestrator | 2026-02-05 00:46:37 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:37.929780 | orchestrator | 2026-02-05 00:46:37 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:40.970905 | orchestrator | 2026-02-05 00:46:40 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:40.973113 | orchestrator | 2026-02-05 00:46:40 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:40.974278 | orchestrator | 2026-02-05 00:46:40 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:40.974304 | orchestrator | 2026-02-05 00:46:40 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:44.012142 | orchestrator | 2026-02-05 00:46:44 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:44.013870 | orchestrator | 2026-02-05 00:46:44 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:44.015615 | orchestrator | 2026-02-05 00:46:44 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:44.015867 | orchestrator | 2026-02-05 00:46:44 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:47.051815 | orchestrator | 2026-02-05 00:46:47 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:47.053855 | orchestrator | 2026-02-05 00:46:47 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:47.056160 | orchestrator | 2026-02-05 00:46:47 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:47.056619 | orchestrator | 2026-02-05 00:46:47 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:50.094371 | orchestrator | 2026-02-05 00:46:50 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:50.095817 | orchestrator | 2026-02-05 00:46:50 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:50.097888 | orchestrator | 2026-02-05 00:46:50 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:50.098513 | orchestrator | 2026-02-05 00:46:50 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:53.140009 | orchestrator | 2026-02-05 00:46:53 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state STARTED
2026-02-05 00:46:53.141599 | orchestrator | 2026-02-05 00:46:53 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:53.144234 | orchestrator | 2026-02-05 00:46:53 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:53.144286 | orchestrator | 2026-02-05 00:46:53 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:56.172779 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:46:56.172878 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:46:56.180773 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task b43070b1-4bce-417f-93d5-023cd9cd1c6e is in state SUCCESS
2026-02-05 00:46:56.182289 | orchestrator |
2026-02-05 00:46:56.182371 | orchestrator |
2026-02-05 00:46:56.182382 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-02-05 00:46:56.182391 | orchestrator |
2026-02-05 00:46:56.182399 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-02-05 00:46:56.182408 | orchestrator | Thursday 05 February 2026 00:45:09 +0000 (0:00:00.236) 0:00:00.236 *****
2026-02-05 00:46:56.182465 | orchestrator | ok: [testbed-manager]
2026-02-05 00:46:56.182494 | orchestrator |
2026-02-05 00:46:56.182503 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-02-05 00:46:56.182511 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:00.838) 0:00:01.075 *****
2026-02-05 00:46:56.182519 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-02-05 00:46:56.182528 | orchestrator |
2026-02-05 00:46:56.182536 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-02-05 00:46:56.182544 | orchestrator | Thursday 05 February 2026 00:45:11 +0000 (0:00:00.505) 0:00:01.580 *****
2026-02-05 00:46:56.182552 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.182559 | orchestrator |
2026-02-05 00:46:56.182567 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-02-05 00:46:56.182575 | orchestrator | Thursday 05 February 2026 00:45:12 +0000 (0:00:01.308) 0:00:02.889 *****
2026-02-05 00:46:56.182582 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-02-05 00:46:56.182590 | orchestrator | ok: [testbed-manager]
2026-02-05 00:46:56.182598 | orchestrator |
2026-02-05 00:46:56.182605 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-02-05 00:46:56.182612 | orchestrator | Thursday 05 February 2026 00:46:26 +0000 (0:01:14.288) 0:01:17.178 *****
2026-02-05 00:46:56.182620 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.182628 | orchestrator |
2026-02-05 00:46:56.182636 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:46:56.182644 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:46:56.182653 | orchestrator |
2026-02-05 00:46:56.182661 | orchestrator |
2026-02-05 00:46:56.182669 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:46:56.182676 | orchestrator | Thursday 05 February 2026 00:46:33 +0000 (0:00:06.448) 0:01:23.626 *****
2026-02-05 00:46:56.182683 | orchestrator | ===============================================================================
2026-02-05 00:46:56.182698 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 74.29s
2026-02-05 00:46:56.182706 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.45s
2026-02-05 00:46:56.182714 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.31s
2026-02-05 00:46:56.182722 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.84s
2026-02-05 00:46:56.182729 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.51s
2026-02-05 00:46:56.182737 | orchestrator |
2026-02-05 00:46:56.182745 | orchestrator |
2026-02-05 00:46:56.182752 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-05 00:46:56.182761 | orchestrator |
2026-02-05 00:46:56.182768 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-05 00:46:56.182776 | orchestrator | Thursday 05 February 2026 00:44:43 +0000 (0:00:00.220) 0:00:00.220 *****
2026-02-05 00:46:56.182785 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:46:56.182794 | orchestrator |
2026-02-05 00:46:56.182802 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-05 00:46:56.182811 | orchestrator | Thursday 05 February 2026 00:44:44 +0000 (0:00:01.080) 0:00:01.301 *****
2026-02-05 00:46:56.182816 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182821 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182825 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182830 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182835 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182846 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182851 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182857 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182863 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182868 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182874 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 00:46:56.182880 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182886 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182891 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182897 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182903 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182923 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182931 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 00:46:56.182939 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182947 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182956 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 00:46:56.182964 | orchestrator |
2026-02-05 00:46:56.182972 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-05 00:46:56.182980 | orchestrator | Thursday 05 February 2026 00:44:48 +0000 (0:00:03.612) 0:00:04.913 *****
2026-02-05 00:46:56.182988 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:46:56.182997 | orchestrator |
2026-02-05 00:46:56.183005 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-05 00:46:56.183013 | orchestrator | Thursday 05 February 2026 00:44:49 +0000 (0:00:01.273) 0:00:06.186 *****
2026-02-05 00:46:56.183024 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183076 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183119 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183180 | orchestrator |
2026-02-05 00:46:56.183185 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-05 00:46:56.183193 | orchestrator | Thursday 05 February 2026 00:44:54 +0000 (0:00:04.259) 0:00:10.446 *****
2026-02-05 00:46:56.183198 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183209 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183258 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:46:56.183265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:46:56.183310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.183331 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:46:56.183338 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:46:56.183346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.183354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183375 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:46:56.183384 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:46:56.183389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.183394 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183408 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:46:56.183434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.183445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183532 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:46:56.183537 | orchestrator | 2026-02-05 00:46:56.183542 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-05 00:46:56.183547 | orchestrator | Thursday 05 February 2026 00:44:55 +0000 (0:00:01.469) 0:00:11.915 ***** 2026-02-05 00:46:56.183560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.183624 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183637 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183653 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:46:56.183662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.183676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.183694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.183704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184039 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184055 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:46:56.184060 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:46:56.184065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.184079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184093 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:46:56.184098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.184103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.184123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184137 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:46:56.184142 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:46:56.184147 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:46:56.184155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.184165 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:46:56.184170 | orchestrator | 2026-02-05 00:46:56.184175 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-05 00:46:56.184181 | orchestrator | Thursday 05 February 2026 00:44:58 +0000 (0:00:03.239) 0:00:15.154 
***** 2026-02-05 00:46:56.184185 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:46:56.184190 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:46:56.184195 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:46:56.184200 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:46:56.184205 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:46:56.184210 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:46:56.184214 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:46:56.184219 | orchestrator | 2026-02-05 00:46:56.184224 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-05 00:46:56.184229 | orchestrator | Thursday 05 February 2026 00:44:59 +0000 (0:00:00.647) 0:00:15.802 ***** 2026-02-05 00:46:56.184234 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:46:56.184238 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:46:56.184243 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:46:56.184248 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:46:56.184253 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:46:56.184257 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:46:56.184262 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:46:56.184271 | orchestrator | 2026-02-05 00:46:56.184277 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-05 00:46:56.184285 | orchestrator | Thursday 05 February 2026 00:45:00 +0000 (0:00:01.046) 0:00:16.848 ***** 2026-02-05 00:46:56.184300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184358 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.184396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184475 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184496 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.184505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.184516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.184525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.184532 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:46:56.184539 | orchestrator | 
2026-02-05 00:46:56.184547 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-05 00:46:56.184631 | orchestrator | Thursday 05 February 2026 00:45:07 +0000 (0:00:06.636) 0:00:23.485 *****
2026-02-05 00:46:56.184647 | orchestrator | [WARNING]: Skipped
2026-02-05 00:46:56.184658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-05 00:46:56.184667 | orchestrator | to this access issue:
2026-02-05 00:46:56.184675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-05 00:46:56.184683 | orchestrator | directory
2026-02-05 00:46:56.184691 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:46:56.184699 | orchestrator | 
2026-02-05 00:46:56.184707 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-05 00:46:56.184715 | orchestrator | Thursday 05 February 2026 00:45:09 +0000 (0:00:02.634) 0:00:26.119 *****
2026-02-05 00:46:56.184724 | orchestrator | [WARNING]: Skipped
2026-02-05 00:46:56.184732 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-05 00:46:56.184739 | orchestrator | to this access issue:
2026-02-05 00:46:56.184747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-05 00:46:56.184775 | orchestrator | directory
2026-02-05 00:46:56.184783 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:46:56.184791 | orchestrator | 
2026-02-05 00:46:56.184798 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-05 00:46:56.184806 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:01.174) 0:00:27.294 *****
2026-02-05 00:46:56.184814 | orchestrator | [WARNING]: Skipped
2026-02-05 00:46:56.184822 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-05 00:46:56.184829 | orchestrator | to this access issue:
2026-02-05 00:46:56.184837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-05 00:46:56.184845 | orchestrator | directory
2026-02-05 00:46:56.184853 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:46:56.184861 | orchestrator | 
2026-02-05 00:46:56.184875 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-05 00:46:56.184883 | orchestrator | Thursday 05 February 2026 00:45:11 +0000 (0:00:00.883) 0:00:28.177 *****
2026-02-05 00:46:56.184891 | orchestrator | [WARNING]: Skipped
2026-02-05 00:46:56.184899 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-05 00:46:56.184907 | orchestrator | to this access issue:
2026-02-05 00:46:56.184915 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-05 00:46:56.184923 | orchestrator | directory
2026-02-05 00:46:56.184930 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:46:56.184937 | orchestrator | 
2026-02-05 00:46:56.184945 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-05 00:46:56.184952 | orchestrator | Thursday 05 February 2026 00:45:12 +0000 (0:00:00.801) 0:00:28.979 *****
2026-02-05 00:46:56.184959 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:46:56.184966 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.184974 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:46:56.184982 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:46:56.184989 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:46:56.184997 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:46:56.185005 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:46:56.185013 | orchestrator | 
2026-02-05 00:46:56.185021 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-05 00:46:56.185028 | orchestrator | Thursday 05 February 2026 00:45:15 +0000 (0:00:03.288) 0:00:32.267 *****
2026-02-05 00:46:56.185036 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185044 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185052 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185077 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185085 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185098 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:46:56.185106 | orchestrator | 
2026-02-05 00:46:56.185114 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-05 00:46:56.185123 | orchestrator | Thursday 05 February 2026 00:45:18 +0000 (0:00:02.959) 0:00:35.226 *****
2026-02-05 00:46:56.185131 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.185139 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:46:56.185147 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:46:56.185154 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:46:56.185163 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:46:56.185171 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:46:56.185244 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:46:56.185253 | orchestrator | 
2026-02-05 00:46:56.185258 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-05 00:46:56.185275 | orchestrator | Thursday 05 February 2026 00:45:21 +0000 (0:00:02.465) 0:00:37.692 *****
2026-02-05 00:46:56.185281 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185287 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185292 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185314 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185327 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185337 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185352 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185364 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185379 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185384 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185392 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185402 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:46:56.185438 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185448 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185453 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185458 | orchestrator | 2026-02-05 00:46:56.185463 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-05 00:46:56.185468 | orchestrator | Thursday 05 February 2026 00:45:23 +0000 
(0:00:02.108) 0:00:39.801 *****
2026-02-05 00:46:56.185473 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185477 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185482 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185495 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185500 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185505 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-05 00:46:56.185509 | orchestrator | 
2026-02-05 00:46:56.185514 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-05 00:46:56.185519 | orchestrator | Thursday 05 February 2026 00:45:26 +0000 (0:00:02.874) 0:00:42.676 *****
2026-02-05 00:46:56.185524 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-05 00:46:56.185529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-05 00:46:56.185534 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-05 00:46:56.185542 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-05 00:46:56.185549 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-05 00:46:56.185557 | orchestrator | changed: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:46:56.185564 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:46:56.185572 | orchestrator | 2026-02-05 00:46:56.185579 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-05 00:46:56.185587 | orchestrator | Thursday 05 February 2026 00:45:28 +0000 (0:00:01.955) 0:00:44.632 ***** 2026-02-05 00:46:56.185594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185620 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185656 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:46:56.185674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185695 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185745 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185813 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:46:56.185820 | orchestrator | 2026-02-05 00:46:56.185826 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-05 00:46:56.185830 | orchestrator | Thursday 05 February 2026 00:45:31 +0000 (0:00:03.169) 0:00:47.802 ***** 2026-02-05 00:46:56.185835 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:56.185840 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:56.185845 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:56.185850 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:46:56.185855 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:56.185860 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:56.185864 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:46:56.185869 | orchestrator | 2026-02-05 00:46:56.185874 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-05 00:46:56.185879 | orchestrator | Thursday 05 February 2026 00:45:33 +0000 (0:00:02.192) 0:00:49.995 ***** 2026-02-05 00:46:56.185884 | orchestrator | changed: [testbed-manager] 2026-02-05 00:46:56.185888 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:46:56.185893 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:46:56.185902 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:46:56.185906 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:46:56.185911 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:46:56.185916 | orchestrator | changed: [testbed-node-5] 2026-02-05 
00:46:56.185921 | orchestrator |
2026-02-05 00:46:56.185925 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.185930 | orchestrator | Thursday 05 February 2026 00:45:34 +0000 (0:00:00.067) 0:00:51.401 *****
2026-02-05 00:46:56.185935 | orchestrator |
2026-02-05 00:46:56.185940 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.185945 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.067) 0:00:51.469 *****
2026-02-05 00:46:56.185949 | orchestrator |
2026-02-05 00:46:56.185954 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.185959 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.062) 0:00:51.532 *****
2026-02-05 00:46:56.185964 | orchestrator |
2026-02-05 00:46:56.185969 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.185973 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.059) 0:00:51.591 *****
2026-02-05 00:46:56.185978 | orchestrator |
2026-02-05 00:46:56.185983 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.185988 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.230) 0:00:51.822 *****
2026-02-05 00:46:56.185992 | orchestrator |
2026-02-05 00:46:56.185997 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.186002 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.067) 0:00:51.890 *****
2026-02-05 00:46:56.186007 | orchestrator |
2026-02-05 00:46:56.186012 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:46:56.186051 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.068) 0:00:51.959 *****
2026-02-05 00:46:56.186056 | orchestrator |
2026-02-05 00:46:56.186061 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-05 00:46:56.186066 | orchestrator | Thursday 05 February 2026 00:45:35 +0000 (0:00:00.087) 0:00:52.046 *****
2026-02-05 00:46:56.186076 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:46:56.186081 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:46:56.186086 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:46:56.186090 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.186095 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:46:56.186100 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:46:56.186105 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:46:56.186110 | orchestrator |
2026-02-05 00:46:56.186114 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-05 00:46:56.186119 | orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:36.370) 0:01:28.417 *****
2026-02-05 00:46:56.186124 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:46:56.186129 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.186133 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:46:56.186138 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:46:56.186143 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:46:56.186148 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:46:56.186152 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:46:56.186157 | orchestrator |
2026-02-05 00:46:56.186162 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-05 00:46:56.186167 | orchestrator | Thursday 05 February 2026 00:46:43 +0000 (0:00:31.297) 0:01:59.714 *****
2026-02-05 00:46:56.186171 | orchestrator | ok: [testbed-manager]
2026-02-05 00:46:56.186177 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:46:56.186182 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:46:56.186187 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:46:56.186191 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:46:56.186196 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:46:56.186205 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:46:56.186209 | orchestrator |
2026-02-05 00:46:56.186214 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-05 00:46:56.186219 | orchestrator | Thursday 05 February 2026 00:46:45 +0000 (0:00:01.811) 0:02:01.526 *****
2026-02-05 00:46:56.186224 | orchestrator | changed: [testbed-manager]
2026-02-05 00:46:56.186229 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:46:56.186234 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:46:56.186238 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:46:56.186243 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:46:56.186248 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:46:56.186253 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:46:56.186257 | orchestrator |
2026-02-05 00:46:56.186262 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:46:56.186267 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186276 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186281 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186286 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186291 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186295 | orchestrator | testbed-node-4 : ok=18  changed=14
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186300 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:46:56.186305 | orchestrator |
2026-02-05 00:46:56.186310 | orchestrator |
2026-02-05 00:46:56.186315 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:46:56.186319 | orchestrator | Thursday 05 February 2026 00:46:54 +0000 (0:00:09.540) 0:02:11.067 *****
2026-02-05 00:46:56.186324 | orchestrator | ===============================================================================
2026-02-05 00:46:56.186329 | orchestrator | common : Restart fluentd container ------------------------------------- 36.37s
2026-02-05 00:46:56.186334 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.30s
2026-02-05 00:46:56.186339 | orchestrator | common : Restart cron container ----------------------------------------- 9.54s
2026-02-05 00:46:56.186343 | orchestrator | common : Copying over config.json files for services -------------------- 6.64s
2026-02-05 00:46:56.186348 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.26s
2026-02-05 00:46:56.186353 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.61s
2026-02-05 00:46:56.186357 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.29s
2026-02-05 00:46:56.186362 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.24s
2026-02-05 00:46:56.186367 | orchestrator | common : Check common containers ---------------------------------------- 3.17s
2026-02-05 00:46:56.186372 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.96s
2026-02-05 00:46:56.186376 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.87s
2026-02-05 00:46:56.186381 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.63s
2026-02-05 00:46:56.186458 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.47s
2026-02-05 00:46:56.186470 | orchestrator | common : Creating log volume -------------------------------------------- 2.19s
2026-02-05 00:46:56.186489 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.11s
2026-02-05 00:46:56.186496 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.96s
2026-02-05 00:46:56.186503 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.81s
2026-02-05 00:46:56.186510 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.47s
2026-02-05 00:46:56.186518 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.41s
2026-02-05 00:46:56.186525 | orchestrator | common : include_tasks -------------------------------------------------- 1.27s
2026-02-05 00:46:56.186623 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:56.186634 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task 2eaf87b1-b595-474a-b11d-f8a2170e0873 is in state STARTED
2026-02-05 00:46:56.186639 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:46:56.186644 | orchestrator | 2026-02-05 00:46:56 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:56.186652 | orchestrator | 2026-02-05 00:46:56 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:46:59.227132 | orchestrator | 2026-02-05 00:46:59 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:46:59.227224 | orchestrator | 2026-02-05 00:46:59 | INFO  | Task
c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:46:59.227576 | orchestrator | 2026-02-05 00:46:59 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:46:59.228350 | orchestrator | 2026-02-05 00:46:59 | INFO  | Task 2eaf87b1-b595-474a-b11d-f8a2170e0873 is in state STARTED
2026-02-05 00:46:59.228872 | orchestrator | 2026-02-05 00:46:59 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:46:59.229479 | orchestrator | 2026-02-05 00:46:59 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:46:59.229516 | orchestrator | 2026-02-05 00:46:59 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:47:02.251303 | orchestrator | 2026-02-05 00:47:02 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:47:02.251483 | orchestrator | 2026-02-05 00:47:02 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:47:02.252128 | orchestrator | 2026-02-05 00:47:02 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:47:02.252777 | orchestrator | 2026-02-05 00:47:02 | INFO  | Task 2eaf87b1-b595-474a-b11d-f8a2170e0873 is in state STARTED
2026-02-05 00:47:02.253320 | orchestrator | 2026-02-05 00:47:02 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:47:02.253917 | orchestrator | 2026-02-05 00:47:02 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:47:02.253951 | orchestrator | 2026-02-05 00:47:02 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:47:05.330539 | orchestrator | 2026-02-05 00:47:05 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:47:05.331939 | orchestrator | 2026-02-05 00:47:05 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:47:05.332484 | orchestrator | 2026-02-05 00:47:05 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:47:05.333080 | orchestrator | 2026-02-05 00:47:05 | INFO  | Task 2eaf87b1-b595-474a-b11d-f8a2170e0873 is in state STARTED
2026-02-05 00:47:05.333599 | orchestrator | 2026-02-05 00:47:05 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:47:05.334326 | orchestrator | 2026-02-05 00:47:05 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:47:05.334347 | orchestrator | 2026-02-05 00:47:05 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:47:08.376842 | orchestrator | 2026-02-05 00:47:08 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:47:08.377210 | orchestrator | 2026-02-05 00:47:08 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:47:08.377751 | orchestrator | 2026-02-05 00:47:08 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:47:08.378226 | orchestrator | 2026-02-05 00:47:08 | INFO  | Task 2eaf87b1-b595-474a-b11d-f8a2170e0873 is in state STARTED
2026-02-05 00:47:08.378804 | orchestrator | 2026-02-05 00:47:08 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:47:08.379486 | orchestrator | 2026-02-05 00:47:08 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:47:08.379514 | orchestrator | 2026-02-05 00:47:08 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:47:11.403500 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:47:11.403981 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:47:11.404550 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:47:11.405119 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task
47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED
2026-02-05 00:47:11.405490 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task 2eaf87b1-b595-474a-b11d-f8a2170e0873 is in state SUCCESS
2026-02-05 00:47:11.406079 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:47:11.406819 | orchestrator | 2026-02-05 00:47:11 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:47:11.406854 | orchestrator | 2026-02-05 00:47:11 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:47:14.430180 | orchestrator | 2026-02-05 00:47:14 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:47:14.430239 | orchestrator | 2026-02-05 00:47:14 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:47:14.430519 | orchestrator | 2026-02-05 00:47:14 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:47:14.431063 | orchestrator | 2026-02-05 00:47:14 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED
2026-02-05 00:47:14.431477 | orchestrator | 2026-02-05 00:47:14 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state STARTED
2026-02-05 00:47:14.432007 | orchestrator | 2026-02-05 00:47:14 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:47:14.432030 | orchestrator | 2026-02-05 00:47:14 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:47:17.475972 | orchestrator | 2026-02-05 00:47:17 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:47:17.479364 | orchestrator | 2026-02-05 00:47:17 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED
2026-02-05 00:47:17.479737 | orchestrator | 2026-02-05 00:47:17 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED
2026-02-05 00:47:17.480352 | orchestrator | 2026-02-05 00:47:17 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED
2026-02-05 00:47:17.483173 | orchestrator | 2026-02-05 00:47:17 | INFO  | Task 2770c8c2-4214-41c4-b3bf-5d68149e9f48 is in state SUCCESS
2026-02-05 00:47:17.483905 | orchestrator |
2026-02-05 00:47:17.483929 | orchestrator |
2026-02-05 00:47:17.483937 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:47:17.483945 | orchestrator |
2026-02-05 00:47:17.483951 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:47:17.483959 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.379) 0:00:00.379 *****
2026-02-05 00:47:17.483966 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:17.483974 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:17.483980 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:17.483986 | orchestrator |
2026-02-05 00:47:17.483994 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:47:17.484002 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.437) 0:00:00.817 *****
2026-02-05 00:47:17.484010 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-05 00:47:17.484017 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-05 00:47:17.484023 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-05 00:47:17.484030 | orchestrator |
2026-02-05 00:47:17.484037 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-05 00:47:17.484043 | orchestrator |
2026-02-05 00:47:17.484050 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-05 00:47:17.484057 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:00.644) 0:00:01.461 *****
2026-02-05 00:47:17.484063 | orchestrator | included:
/ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:47:17.484070 | orchestrator |
2026-02-05 00:47:17.484076 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-05 00:47:17.484082 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:00.570) 0:00:02.031 *****
2026-02-05 00:47:17.484089 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-05 00:47:17.484096 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-05 00:47:17.484102 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-05 00:47:17.484109 | orchestrator |
2026-02-05 00:47:17.484115 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-05 00:47:17.484121 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:00.735) 0:00:02.767 *****
2026-02-05 00:47:17.484128 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-05 00:47:17.484134 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-05 00:47:17.484140 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-05 00:47:17.484146 | orchestrator |
2026-02-05 00:47:17.484152 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-05 00:47:17.484159 | orchestrator | Thursday 05 February 2026 00:47:03 +0000 (0:00:01.512) 0:00:04.280 *****
2026-02-05 00:47:17.484165 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:17.484171 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:17.484177 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:17.484184 | orchestrator |
2026-02-05 00:47:17.484189 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-05 00:47:17.484196 | orchestrator | Thursday 05 February 2026 00:47:05 +0000 (0:00:01.635) 0:00:05.915 *****
2026-02-05 00:47:17.484203 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:17.484209 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:17.484216 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:17.484222 | orchestrator |
2026-02-05 00:47:17.484228 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:47:17.484234 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:17.484259 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:17.484264 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:17.484267 | orchestrator |
2026-02-05 00:47:17.484271 | orchestrator |
2026-02-05 00:47:17.484275 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:47:17.484279 | orchestrator | Thursday 05 February 2026 00:47:08 +0000 (0:00:03.187) 0:00:09.103 *****
2026-02-05 00:47:17.484283 | orchestrator | ===============================================================================
2026-02-05 00:47:17.484287 | orchestrator | memcached : Restart memcached container --------------------------------- 3.19s
2026-02-05 00:47:17.484291 | orchestrator | memcached : Check memcached container ----------------------------------- 1.64s
2026-02-05 00:47:17.484295 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.51s
2026-02-05 00:47:17.484299 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.74s
2026-02-05 00:47:17.484303 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-02-05 00:47:17.484307 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.57s
2026-02-05 00:47:17.484311 |
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2026-02-05 00:47:17.484314 | orchestrator |
2026-02-05 00:47:17.484318 | orchestrator |
2026-02-05 00:47:17.484322 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:47:17.484326 | orchestrator |
2026-02-05 00:47:17.484330 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:47:17.484333 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.231) 0:00:00.231 *****
2026-02-05 00:47:17.484337 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:17.484341 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:17.484345 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:17.484348 | orchestrator |
2026-02-05 00:47:17.484352 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:47:17.484368 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.335) 0:00:00.566 *****
2026-02-05 00:47:17.484374 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-05 00:47:17.484381 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-05 00:47:17.484387 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-05 00:47:17.484393 | orchestrator |
2026-02-05 00:47:17.484400 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-05 00:47:17.484425 | orchestrator |
2026-02-05 00:47:17.484432 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-05 00:47:17.484438 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:00.699) 0:00:01.265 *****
2026-02-05 00:47:17.484444 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:47:17.484450 | orchestrator |
2026-02-05 00:47:17.484456 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-05 00:47:17.484479 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:00.448) 0:00:01.714 *****
2026-02-05 00:47:17.484488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484551 | orchestrator |
2026-02-05 00:47:17.484558 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-05 00:47:17.484565 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:01.265) 0:00:02.979 *****
2026-02-05 00:47:17.484572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel',
'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484628 | orchestrator |
2026-02-05 00:47:17.484635 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-05 00:47:17.484642 | orchestrator | Thursday 05 February 2026 00:47:04 +0000 (0:00:02.177) 0:00:05.157 *****
2026-02-05 00:47:17.484650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:47:17.484687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484706 | orchestrator | 2026-02-05 00:47:17.484712 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-05 00:47:17.484719 | orchestrator | Thursday 05 February 2026 00:47:07 +0000 (0:00:02.458) 0:00:07.616 ***** 2026-02-05 00:47:17.484726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:47:17.484777 | orchestrator | 2026-02-05 00:47:17.484784 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 00:47:17.484790 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:01.608) 0:00:09.225 ***** 2026-02-05 00:47:17.484801 | orchestrator | 2026-02-05 00:47:17.484807 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 00:47:17.484813 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:00.071) 0:00:09.297 ***** 2026-02-05 00:47:17.484819 | orchestrator | 2026-02-05 00:47:17.484825 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-02-05 00:47:17.484831 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:00.092) 0:00:09.389 ***** 2026-02-05 00:47:17.484837 | orchestrator | 2026-02-05 00:47:17.484843 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-05 00:47:17.484849 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:00.075) 0:00:09.465 ***** 2026-02-05 00:47:17.484855 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:47:17.484862 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:47:17.484867 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:47:17.484874 | orchestrator | 2026-02-05 00:47:17.484880 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-05 00:47:17.484886 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:03.306) 0:00:12.771 ***** 2026-02-05 00:47:17.484892 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:47:17.484898 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:47:17.484905 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:47:17.484911 | orchestrator | 2026-02-05 00:47:17.484917 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:47:17.484923 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:47:17.484929 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:47:17.484936 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:47:17.484942 | orchestrator | 2026-02-05 00:47:17.484948 | orchestrator | 2026-02-05 00:47:17.484954 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:47:17.484960 | orchestrator | Thursday 05 February 
2026 00:47:16 +0000 (0:00:04.225) 0:00:16.997 ***** 2026-02-05 00:47:17.484966 | orchestrator | =============================================================================== 2026-02-05 00:47:17.484973 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.23s 2026-02-05 00:47:17.484980 | orchestrator | redis : Restart redis container ----------------------------------------- 3.31s 2026-02-05 00:47:17.484984 | orchestrator | redis : Copying over redis config files --------------------------------- 2.46s 2026-02-05 00:47:17.484987 | orchestrator | redis : Copying over default config.json files -------------------------- 2.18s 2026-02-05 00:47:17.484991 | orchestrator | redis : Check redis containers ------------------------------------------ 1.61s 2026-02-05 00:47:17.484996 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.27s 2026-02-05 00:47:17.485002 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-02-05 00:47:17.485008 | orchestrator | redis : include_tasks --------------------------------------------------- 0.45s 2026-02-05 00:47:17.485014 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-05 00:47:17.485020 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2026-02-05 00:47:17.485027 | orchestrator | 2026-02-05 00:47:17 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:47:17.485033 | orchestrator | 2026-02-05 00:47:17 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:47:20.516227 | orchestrator | 2026-02-05 00:47:20 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:47:20.517496 | orchestrator | 2026-02-05 00:47:20 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED 2026-02-05 00:47:20.517661 | orchestrator | 2026-02-05 00:47:20 
| INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:47:20.517670 | orchestrator | 2026-02-05 00:47:20 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:47:20.517680 | orchestrator | 2026-02-05 00:47:20 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:47:20.517685 | orchestrator | 2026-02-05 00:47:20 | INFO  | Wait 1
second(s) until the next check 2026-02-05 00:48:00.089031 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:00.089245 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED 2026-02-05 00:48:00.090360 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:00.091010 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:00.091806 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:48:00.091852 | orchestrator | 2026-02-05 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:03.130882 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:03.132141 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state STARTED 2026-02-05 00:48:03.132301 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:03.135748 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:03.135837 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:48:03.135850 | orchestrator | 2026-02-05 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:06.182554 | orchestrator | 2026-02-05 00:48:06 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:06.184718 | orchestrator | 2026-02-05 00:48:06 | INFO  | Task c2424e2f-b80d-48ae-ab97-5aa38821a7cc is in state SUCCESS 2026-02-05 00:48:06.185689 | orchestrator | 2026-02-05 00:48:06.185723 | orchestrator | 2026-02-05 
00:48:06.185728 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:48:06.185732 | orchestrator | 2026-02-05 00:48:06.185735 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:48:06.185739 | orchestrator | Thursday 05 February 2026 00:46:59 +0000 (0:00:00.221) 0:00:00.221 ***** 2026-02-05 00:48:06.185742 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:48:06.185746 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:48:06.185750 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:48:06.185753 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:48:06.185756 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:48:06.185759 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:48:06.185762 | orchestrator | 2026-02-05 00:48:06.185766 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:48:06.185769 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.745) 0:00:00.967 ***** 2026-02-05 00:48:06.185772 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:48:06.185775 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:48:06.185779 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:48:06.185782 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:48:06.185785 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:48:06.185788 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:48:06.185791 | orchestrator | 2026-02-05 00:48:06.185795 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-05 00:48:06.185798 | 
orchestrator | 2026-02-05 00:48:06.185801 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-05 00:48:06.185804 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.625) 0:00:01.592 ***** 2026-02-05 00:48:06.185808 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:48:06.185811 | orchestrator | 2026-02-05 00:48:06.185815 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 00:48:06.185818 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:01.280) 0:00:02.872 ***** 2026-02-05 00:48:06.185821 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-05 00:48:06.185824 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-05 00:48:06.185828 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-05 00:48:06.185839 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-05 00:48:06.185842 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-05 00:48:06.185846 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-05 00:48:06.185852 | orchestrator | 2026-02-05 00:48:06.185861 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-05 00:48:06.185866 | orchestrator | Thursday 05 February 2026 00:47:03 +0000 (0:00:01.129) 0:00:04.001 ***** 2026-02-05 00:48:06.185884 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-05 00:48:06.185889 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-05 00:48:06.185894 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-05 00:48:06.185899 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-05 00:48:06.185904 | orchestrator | changed: 
[testbed-node-4] => (item=openvswitch) 2026-02-05 00:48:06.185909 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-05 00:48:06.185914 | orchestrator | 2026-02-05 00:48:06.185919 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 00:48:06.185924 | orchestrator | Thursday 05 February 2026 00:47:04 +0000 (0:00:01.441) 0:00:05.442 ***** 2026-02-05 00:48:06.185929 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-05 00:48:06.185935 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:48:06.185941 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-05 00:48:06.185947 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:48:06.185952 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-05 00:48:06.185957 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:48:06.185962 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-05 00:48:06.185967 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:48:06.185972 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-05 00:48:06.185975 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:48:06.185980 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-05 00:48:06.185985 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:48:06.185990 | orchestrator | 2026-02-05 00:48:06.185995 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-05 00:48:06.186001 | orchestrator | Thursday 05 February 2026 00:47:05 +0000 (0:00:01.126) 0:00:06.569 ***** 2026-02-05 00:48:06.186006 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:48:06.186038 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:48:06.186043 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:48:06.186049 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:48:06.186056 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:48:06.186062 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:48:06.186067 | orchestrator | 2026-02-05 00:48:06.186072 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-05 00:48:06.186077 | orchestrator | Thursday 05 February 2026 00:47:06 +0000 (0:00:00.678) 0:00:07.247 ***** 2026-02-05 00:48:06.186093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186110 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-02-05 00:48:06.186144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186153 | orchestrator | 2026-02-05 00:48:06.186156 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-05 00:48:06.186160 | orchestrator | Thursday 05 February 2026 00:47:08 +0000 (0:00:01.635) 0:00:08.882 ***** 2026-02-05 00:48:06.186163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186177 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186201 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186219 | orchestrator | 2026-02-05 00:48:06.186222 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-05 00:48:06.186225 | orchestrator | Thursday 05 February 2026 00:47:11 +0000 (0:00:03.260) 0:00:12.142 ***** 2026-02-05 00:48:06.186228 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:48:06.186232 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:48:06.186235 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:48:06.186238 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:48:06.186241 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:48:06.186244 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:48:06.186247 | orchestrator | 2026-02-05 00:48:06.186250 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-05 00:48:06.186253 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:01.143) 0:00:13.285 ***** 2026-02-05 00:48:06.186258 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186291 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:48:06.186314 | orchestrator | 2026-02-05 00:48:06.186317 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:48:06.186323 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:02.726) 0:00:16.012 ***** 2026-02-05 00:48:06.186326 | orchestrator | 2026-02-05 00:48:06.186330 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:48:06.186334 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.472) 0:00:16.484 ***** 2026-02-05 00:48:06.186337 | orchestrator | 2026-02-05 00:48:06.186341 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:48:06.186344 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.145) 0:00:16.629 ***** 2026-02-05 00:48:06.186348 | orchestrator | 2026-02-05 00:48:06.186352 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-05 00:48:06.186356 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.176) 0:00:16.805 *****
2026-02-05 00:48:06.186359 | orchestrator |
2026-02-05 00:48:06.186363 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-05 00:48:06.186367 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:00.300) 0:00:17.106 *****
2026-02-05 00:48:06.186371 | orchestrator |
2026-02-05 00:48:06.186374 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-05 00:48:06.186378 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:00.332) 0:00:17.439 *****
2026-02-05 00:48:06.186382 | orchestrator |
2026-02-05 00:48:06.186385 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-05 00:48:06.186389 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:00.219) 0:00:17.659 *****
2026-02-05 00:48:06.186407 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:06.186411 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:06.186414 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:06.186418 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:06.186422 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:06.186428 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:06.186432 | orchestrator |
2026-02-05 00:48:06.186436 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-05 00:48:06.186440 | orchestrator | Thursday 05 February 2026 00:47:27 +0000 (0:00:10.484) 0:00:28.144 *****
2026-02-05 00:48:06.186443 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:48:06.186447 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:48:06.186451 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:48:06.186454 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:48:06.186458 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:48:06.186461 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:48:06.186465 | orchestrator |
2026-02-05 00:48:06.186469 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-05 00:48:06.186473 | orchestrator | Thursday 05 February 2026 00:47:28 +0000 (0:00:01.399) 0:00:29.543 *****
2026-02-05 00:48:06.186476 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:06.186480 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:06.186484 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:06.186487 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:06.186491 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:06.186495 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:06.186498 | orchestrator |
2026-02-05 00:48:06.186503 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-05 00:48:06.186506 | orchestrator | Thursday 05 February 2026 00:47:39 +0000 (0:00:11.186) 0:00:40.729 *****
2026-02-05 00:48:06.186512 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-05 00:48:06.186516 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-05 00:48:06.186520 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-05 00:48:06.186524 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-05 00:48:06.186528 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-05 00:48:06.186531 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-05 00:48:06.186535 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-05 00:48:06.186539 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-05 00:48:06.186542 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-05 00:48:06.186546 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-05 00:48:06.186550 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-05 00:48:06.186553 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-05 00:48:06.186557 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-05 00:48:06.186561 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-05 00:48:06.186564 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-05 00:48:06.186568 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-05 00:48:06.186573 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-05 00:48:06.186581 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-05 00:48:06.186584 | orchestrator |
2026-02-05 00:48:06.186588 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-05 00:48:06.186592 | orchestrator | Thursday 05 February 2026 00:47:47 +0000 (0:00:07.613) 0:00:48.343 *****
2026-02-05 00:48:06.186595 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-05 00:48:06.186599 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:48:06.186602 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-05 00:48:06.186606 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:48:06.186610 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-05 00:48:06.186685 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:48:06.186693 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-05 00:48:06.186701 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-05 00:48:06.186708 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-05 00:48:06.186713 | orchestrator |
2026-02-05 00:48:06.186718 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-05 00:48:06.186723 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:03.185) 0:00:51.529 *****
2026-02-05 00:48:06.186728 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-05 00:48:06.186733 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-05 00:48:06.186738 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:48:06.186742 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:48:06.186747 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-05 00:48:06.186752 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:48:06.186757 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-05 00:48:06.186762 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-05 00:48:06.186766 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-05 00:48:06.186770 | orchestrator |
2026-02-05 00:48:06.186773 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-05 00:48:06.186776 | orchestrator | Thursday 05 February 2026 00:47:54 +0000 (0:00:04.250) 0:00:55.779 *****
2026-02-05 00:48:06.186779 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:06.186785 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:06.186789 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:06.186794 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:06.186800 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:06.186805 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:06.186811 | orchestrator |
2026-02-05 00:48:06.186815 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:48:06.186818 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:48:06.186825 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:48:06.186829 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:48:06.186832 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 00:48:06.186835 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 00:48:06.186839 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 00:48:06.186848 | orchestrator |
2026-02-05 00:48:06.186853 | orchestrator |
2026-02-05 00:48:06.186858 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:48:06.186863 | orchestrator | Thursday 05 February 2026 00:48:03 +0000 (0:00:08.800) 0:01:04.581 *****
2026-02-05 00:48:06.186868 | orchestrator | ===============================================================================
2026-02-05 00:48:06.186874 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.99s
2026-02-05 00:48:06.186879 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.48s
2026-02-05 00:48:06.186884 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.61s
2026-02-05 00:48:06.186890 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.25s
2026-02-05 00:48:06.186895 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.26s
2026-02-05 00:48:06.186900 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.19s
2026-02-05 00:48:06.186907 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.73s
2026-02-05 00:48:06.186914 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.65s
2026-02-05 00:48:06.186919 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.64s
2026-02-05 00:48:06.186924 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.44s
2026-02-05 00:48:06.186929 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.40s
2026-02-05 00:48:06.186938 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.28s
2026-02-05 00:48:06.186943 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.14s
2026-02-05 00:48:06.186948 | orchestrator | module-load : Load modules ---------------------------------------------- 1.13s
2026-02-05 00:48:06.186953 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.13s
2026-02-05 00:48:06.186958 | orchestrator |
Group hosts based on Kolla action --------------------------------------- 0.75s 2026-02-05 00:48:06.186963 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.68s 2026-02-05 00:48:06.186969 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-02-05 00:48:06.186974 | orchestrator | 2026-02-05 00:48:06 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:06.188739 | orchestrator | 2026-02-05 00:48:06 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:48:06.190329 | orchestrator | 2026-02-05 00:48:06 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:06.191910 | orchestrator | 2026-02-05 00:48:06 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:48:06.191952 | orchestrator | 2026-02-05 00:48:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:09.226693 | orchestrator | 2026-02-05 00:48:09 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:09.227183 | orchestrator | 2026-02-05 00:48:09 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:09.227979 | orchestrator | 2026-02-05 00:48:09 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:48:09.228802 | orchestrator | 2026-02-05 00:48:09 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:09.229640 | orchestrator | 2026-02-05 00:48:09 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:48:09.229664 | orchestrator | 2026-02-05 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:12.264649 | orchestrator | 2026-02-05 00:48:12 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:12.265121 | orchestrator | 2026-02-05 00:48:12 | INFO  | Task 
a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:12.266063 | orchestrator | 2026-02-05 00:48:12 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:48:12.266787 | orchestrator | 2026-02-05 00:48:12 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:12.267545 | orchestrator | 2026-02-05 00:48:12 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:48:12.267558 | orchestrator | 2026-02-05 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:15.296527 | orchestrator | 2026-02-05 00:48:15 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:15.297085 | orchestrator | 2026-02-05 00:48:15 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:15.297782 | orchestrator | 2026-02-05 00:48:15 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:48:15.298591 | orchestrator | 2026-02-05 00:48:15 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:15.299517 | orchestrator | 2026-02-05 00:48:15 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:48:15.299541 | orchestrator | 2026-02-05 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:18.347643 | orchestrator | 2026-02-05 00:48:18 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:48:18.347706 | orchestrator | 2026-02-05 00:48:18 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:48:18.347716 | orchestrator | 2026-02-05 00:48:18 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:48:18.347724 | orchestrator | 2026-02-05 00:48:18 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:48:18.347731 | orchestrator | 2026-02-05 00:48:18 | INFO  | Task 
0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:04.119583 | orchestrator | 2026-02-05 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:07.164250 | orchestrator | 2026-02-05 00:49:07 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:07.165169 | orchestrator | 2026-02-05 00:49:07 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:49:07.170402 | orchestrator | 2026-02-05 00:49:07 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:07.268765 | orchestrator | 2026-02-05 00:49:07 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:07.268859 | orchestrator | 2026-02-05 00:49:07 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:07.268875 | orchestrator | 2026-02-05 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:10.241212 | orchestrator | 2026-02-05 00:49:10 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:10.242607 | orchestrator | 2026-02-05 00:49:10 | INFO  | Task a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state STARTED 2026-02-05 00:49:10.243820 | orchestrator | 2026-02-05 00:49:10 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:10.244919 | orchestrator | 2026-02-05 00:49:10 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:10.246310 | orchestrator | 2026-02-05 00:49:10 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:10.246348 | orchestrator | 2026-02-05 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:13.279440 | orchestrator | 2026-02-05 00:49:13 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:13.281670 | orchestrator | 2026-02-05 00:49:13 | INFO  | Task 
a65fa9d5-fd0e-40a5-9a91-b22f2a7d5480 is in state SUCCESS 2026-02-05 00:49:13.282530 | orchestrator | 2026-02-05 00:49:13.282582 | orchestrator | 2026-02-05 00:49:13.282594 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-05 00:49:13.282604 | orchestrator | 2026-02-05 00:49:13.282614 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-05 00:49:13.282623 | orchestrator | Thursday 05 February 2026 00:44:44 +0000 (0:00:00.149) 0:00:00.149 ***** 2026-02-05 00:49:13.282632 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:49:13.282642 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:49:13.282650 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:49:13.282658 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:13.282666 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:49:13.282675 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:13.282683 | orchestrator | 2026-02-05 00:49:13.282692 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-05 00:49:13.282701 | orchestrator | Thursday 05 February 2026 00:44:45 +0000 (0:00:00.766) 0:00:00.916 ***** 2026-02-05 00:49:13.282710 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:13.282720 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.282729 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.282738 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.282748 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.282756 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.282764 | orchestrator | 2026-02-05 00:49:13.282773 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-05 00:49:13.282783 | orchestrator | Thursday 05 February 2026 00:44:46 +0000 (0:00:00.752) 0:00:01.668 ***** 2026-02-05 00:49:13.282793 | orchestrator | 
skipping: [testbed-node-3] 2026-02-05 00:49:13.282802 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.282811 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.282820 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.282829 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.282839 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.282848 | orchestrator | 2026-02-05 00:49:13.282857 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-05 00:49:13.282867 | orchestrator | Thursday 05 February 2026 00:44:47 +0000 (0:00:00.831) 0:00:02.500 ***** 2026-02-05 00:49:13.282876 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:49:13.282885 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:13.282895 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:13.282904 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:49:13.282914 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:13.282923 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:49:13.282933 | orchestrator | 2026-02-05 00:49:13.282943 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-05 00:49:13.282952 | orchestrator | Thursday 05 February 2026 00:44:48 +0000 (0:00:01.877) 0:00:04.377 ***** 2026-02-05 00:49:13.282982 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:49:13.282992 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:49:13.283001 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:49:13.283010 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:13.283019 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:13.283028 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:13.283036 | orchestrator | 2026-02-05 00:49:13.283045 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-05 00:49:13.283054 | 
orchestrator | Thursday 05 February 2026 00:44:51 +0000 (0:00:02.084) 0:00:06.461 ***** 2026-02-05 00:49:13.283063 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:49:13.283072 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:49:13.283079 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:49:13.283088 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:13.283096 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:13.283105 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:13.283113 | orchestrator | 2026-02-05 00:49:13.283122 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-05 00:49:13.283130 | orchestrator | Thursday 05 February 2026 00:44:52 +0000 (0:00:01.262) 0:00:07.724 ***** 2026-02-05 00:49:13.283140 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:13.283150 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.283160 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.283169 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.283178 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.283187 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.283196 | orchestrator | 2026-02-05 00:49:13.283204 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-05 00:49:13.283213 | orchestrator | Thursday 05 February 2026 00:44:52 +0000 (0:00:00.613) 0:00:08.338 ***** 2026-02-05 00:49:13.283222 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:13.283231 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.283240 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.283703 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.283739 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.283748 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.283756 | orchestrator | 2026-02-05 
00:49:13.283764 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-05 00:49:13.283773 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 (0:00:00.763) 0:00:09.101 ***** 2026-02-05 00:49:13.283781 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 00:49:13.283789 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 00:49:13.283795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:13.283803 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 00:49:13.283811 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 00:49:13.283818 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.283826 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 00:49:13.283834 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 00:49:13.283842 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.283850 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 00:49:13.283890 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 00:49:13.283900 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 00:49:13.283908 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 00:49:13.283916 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.283924 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.283955 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 00:49:13.283963 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
2026-02-05 00:49:13.283971 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.283978 | orchestrator |
2026-02-05 00:49:13.283985 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-05 00:49:13.283992 | orchestrator | Thursday 05 February 2026 00:44:54 +0000 (0:00:00.752) 0:00:09.854 *****
2026-02-05 00:49:13.284000 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284007 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284014 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284022 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284029 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284036 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284044 | orchestrator |
2026-02-05 00:49:13.284052 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-05 00:49:13.284062 | orchestrator | Thursday 05 February 2026 00:44:55 +0000 (0:00:01.337) 0:00:11.191 *****
2026-02-05 00:49:13.284070 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:49:13.284078 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:49:13.284086 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:49:13.284094 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.284102 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.284110 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.284117 | orchestrator |
2026-02-05 00:49:13.284125 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-05 00:49:13.284133 | orchestrator | Thursday 05 February 2026 00:44:57 +0000 (0:00:01.882) 0:00:13.073 *****
2026-02-05 00:49:13.284141 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.284150 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:49:13.284158 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:49:13.284166 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.284175 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.284183 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:49:13.284191 | orchestrator |
2026-02-05 00:49:13.284199 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-05 00:49:13.284207 | orchestrator | Thursday 05 February 2026 00:45:04 +0000 (0:00:06.516) 0:00:19.590 *****
2026-02-05 00:49:13.284215 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284223 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284231 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284240 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284248 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284257 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284265 | orchestrator |
2026-02-05 00:49:13.284273 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-05 00:49:13.284282 | orchestrator | Thursday 05 February 2026 00:45:05 +0000 (0:00:01.260) 0:00:20.850 *****
2026-02-05 00:49:13.284291 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284299 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284307 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284315 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284324 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284332 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284340 | orchestrator |
2026-02-05 00:49:13.284397 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-05 00:49:13.284411 | orchestrator | Thursday 05 February 2026 00:45:08 +0000 (0:00:02.867) 0:00:23.718 *****
2026-02-05 00:49:13.284420 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284428 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284437 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284445 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284462 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284471 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284480 | orchestrator |
2026-02-05 00:49:13.284494 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-05 00:49:13.284503 | orchestrator | Thursday 05 February 2026 00:45:09 +0000 (0:00:00.893) 0:00:24.612 *****
2026-02-05 00:49:13.284511 | orchestrator | skipping: [testbed-node-3] => (item=rancher) 
2026-02-05 00:49:13.284520 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s) 
2026-02-05 00:49:13.284527 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284535 | orchestrator | skipping: [testbed-node-4] => (item=rancher) 
2026-02-05 00:49:13.284543 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s) 
2026-02-05 00:49:13.284550 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284557 | orchestrator | skipping: [testbed-node-5] => (item=rancher) 
2026-02-05 00:49:13.284564 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s) 
2026-02-05 00:49:13.284572 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284580 | orchestrator | skipping: [testbed-node-0] => (item=rancher) 
2026-02-05 00:49:13.284588 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s) 
2026-02-05 00:49:13.284597 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284605 | orchestrator | skipping: [testbed-node-1] => (item=rancher) 
2026-02-05 00:49:13.284613 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s) 
2026-02-05 00:49:13.284622 | orchestrator | skipping: [testbed-node-2] => (item=rancher) 
2026-02-05 00:49:13.284630 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284638 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s) 
2026-02-05 00:49:13.284647 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284656 | orchestrator |
2026-02-05 00:49:13.284664 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-05 00:49:13.284685 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:01.124) 0:00:25.736 *****
2026-02-05 00:49:13.284694 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284702 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284710 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284719 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284727 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284736 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284744 | orchestrator |
2026-02-05 00:49:13.284753 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-05 00:49:13.284761 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:00.535) 0:00:26.272 *****
2026-02-05 00:49:13.284769 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.284777 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.284785 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.284794 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.284802 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.284810 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.284819 | orchestrator |
2026-02-05 00:49:13.284826 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-05 00:49:13.284834 | orchestrator |
2026-02-05 00:49:13.284842 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-05 00:49:13.284850 | orchestrator | Thursday 05 February 2026 00:45:12 +0000 (0:00:01.248) 0:00:27.520 *****
2026-02-05 00:49:13.284858 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.284865 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.284873 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.284881 | orchestrator |
2026-02-05 00:49:13.284890 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-05 00:49:13.284898 | orchestrator | Thursday 05 February 2026 00:45:13 +0000 (0:00:01.369) 0:00:28.890 *****
2026-02-05 00:49:13.284906 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.284920 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.284929 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.284937 | orchestrator |
2026-02-05 00:49:13.284945 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-05 00:49:13.284953 | orchestrator | Thursday 05 February 2026 00:45:15 +0000 (0:00:01.592) 0:00:30.483 *****
2026-02-05 00:49:13.284962 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.284970 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.284979 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.284987 | orchestrator |
2026-02-05 00:49:13.284995 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-05 00:49:13.285004 | orchestrator | Thursday 05 February 2026 00:45:15 +0000 (0:00:00.905) 0:00:31.389 *****
2026-02-05 00:49:13.285012 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.285020 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.285029 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.285037 | orchestrator |
2026-02-05 00:49:13.285046 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-05 00:49:13.285054 | orchestrator | Thursday 05 February 2026 00:45:16 +0000 (0:00:00.929) 0:00:32.318 *****
2026-02-05 00:49:13.285063 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.285071 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285080 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285088 | orchestrator |
2026-02-05 00:49:13.285097 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-05 00:49:13.285105 | orchestrator | Thursday 05 February 2026 00:45:17 +0000 (0:00:00.340) 0:00:32.658 *****
2026-02-05 00:49:13.285114 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.285122 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285131 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.285139 | orchestrator |
2026-02-05 00:49:13.285147 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-05 00:49:13.285154 | orchestrator | Thursday 05 February 2026 00:45:18 +0000 (0:00:00.942) 0:00:33.600 *****
2026-02-05 00:49:13.285162 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.285170 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285177 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.285185 | orchestrator |
2026-02-05 00:49:13.285192 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-05 00:49:13.285200 | orchestrator | Thursday 05 February 2026 00:45:19 +0000 (0:00:01.492) 0:00:35.092 *****
2026-02-05 00:49:13.285213 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:49:13.285221 | orchestrator |
2026-02-05 00:49:13.285229 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-05 00:49:13.285238 | orchestrator | Thursday 05 February 2026 00:45:20 +0000 (0:00:00.530) 0:00:35.623 *****
2026-02-05 00:49:13.285246 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.285255 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.285263 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.285271 | orchestrator |
2026-02-05 00:49:13.285280 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-05 00:49:13.285288 | orchestrator | Thursday 05 February 2026 00:45:22 +0000 (0:00:02.100) 0:00:37.724 *****
2026-02-05 00:49:13.285296 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285319 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285328 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285337 | orchestrator |
2026-02-05 00:49:13.285345 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-05 00:49:13.285367 | orchestrator | Thursday 05 February 2026 00:45:22 +0000 (0:00:00.591) 0:00:38.315 *****
2026-02-05 00:49:13.285375 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285383 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285392 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285400 | orchestrator |
2026-02-05 00:49:13.285415 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-05 00:49:13.285424 | orchestrator | Thursday 05 February 2026 00:45:23 +0000 (0:00:01.005) 0:00:39.320 *****
2026-02-05 00:49:13.285432 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285441 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285449 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285457 | orchestrator |
2026-02-05 00:49:13.285466 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-05 00:49:13.285481 | orchestrator | Thursday 05 February 2026 00:45:25 +0000 (0:00:01.342) 0:00:40.662 *****
2026-02-05 00:49:13.285490 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.285497 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285504 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285512 | orchestrator |
2026-02-05 00:49:13.285520 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-05 00:49:13.285527 | orchestrator | Thursday 05 February 2026 00:45:25 +0000 (0:00:00.470) 0:00:41.133 *****
2026-02-05 00:49:13.285534 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.285541 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285548 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285556 | orchestrator |
2026-02-05 00:49:13.285563 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-05 00:49:13.285571 | orchestrator | Thursday 05 February 2026 00:45:26 +0000 (0:00:00.334) 0:00:41.467 *****
2026-02-05 00:49:13.285579 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285587 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.285594 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.285601 | orchestrator |
2026-02-05 00:49:13.285608 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-05 00:49:13.285616 | orchestrator | Thursday 05 February 2026 00:45:27 +0000 (0:00:01.762) 0:00:43.230 *****
2026-02-05 00:49:13.285623 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.285630 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.285637 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.285644 | orchestrator |
2026-02-05 00:49:13.285652 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-05 00:49:13.285661 | orchestrator | Thursday 05 February 2026 00:45:31 +0000 (0:00:03.363) 0:00:46.594 *****
2026-02-05 00:49:13.285669 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.285678 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.285686 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.285694 | orchestrator |
2026-02-05 00:49:13.285702 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-05 00:49:13.285710 | orchestrator | Thursday 05 February 2026 00:45:32 +0000 (0:00:00.901) 0:00:47.496 *****
2026-02-05 00:49:13.285719 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-05 00:49:13.285728 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-05 00:49:13.285736 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-05 00:49:13.285745 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-05 00:49:13.285753 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-05 00:49:13.285762 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-05 00:49:13.285770 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-05 00:49:13.285785 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-05 00:49:13.285793 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-05 00:49:13.285802 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-05 00:49:13.285814 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-05 00:49:13.285823 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-05 00:49:13.285831 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.285840 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.285848 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.285856 | orchestrator |
2026-02-05 00:49:13.285865 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-05 00:49:13.285873 | orchestrator | Thursday 05 February 2026 00:46:15 +0000 (0:00:43.716) 0:01:31.212 *****
2026-02-05 00:49:13.285881 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.285889 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.285897 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.285904 | orchestrator |
2026-02-05 00:49:13.285912 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-05 00:49:13.285920 | orchestrator | Thursday 05 February 2026 00:46:16 +0000 (0:00:00.320) 0:01:31.533 *****
2026-02-05 00:49:13.285928 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285935 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.285943 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.285950 | orchestrator |
2026-02-05 00:49:13.285956 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-05 00:49:13.285963 | orchestrator | Thursday 05 February 2026 00:46:17 +0000 (0:00:00.992) 0:01:32.525 *****
2026-02-05 00:49:13.285971 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.285978 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.285984 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.285991 | orchestrator |
2026-02-05 00:49:13.286006 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-05 00:49:13.286072 | orchestrator | Thursday 05 February 2026 00:46:18 +0000 (0:00:01.230) 0:01:33.755 *****
2026-02-05 00:49:13.286082 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.286089 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.286097 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.286104 | orchestrator |
2026-02-05 00:49:13.286111 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-05 00:49:13.286119 | orchestrator | Thursday 05 February 2026 00:46:45 +0000 (0:00:26.809) 0:02:00.565 *****
2026-02-05 00:49:13.286126 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.286134 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.286142 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.286149 | orchestrator |
2026-02-05 00:49:13.286157 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-05 00:49:13.286164 | orchestrator | Thursday 05 February 2026 00:46:45 +0000 (0:00:00.705) 0:02:01.271 *****
2026-02-05 00:49:13.286171 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.286178 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.286185 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.286193 | orchestrator |
2026-02-05 00:49:13.286200 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-05 00:49:13.286208 | orchestrator | Thursday 05 February 2026 00:46:46 +0000 (0:00:00.632) 0:02:01.903 *****
2026-02-05 00:49:13.286215 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.286221 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.286235 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.286243 | orchestrator |
2026-02-05 00:49:13.286251 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-05 00:49:13.286259 | orchestrator | Thursday 05 February 2026 00:46:47 +0000 (0:00:00.625) 0:02:02.529 *****
2026-02-05 00:49:13.286266 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.286274 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.286282 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.286290 | orchestrator |
2026-02-05 00:49:13.286298 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-05 00:49:13.286306 | orchestrator | Thursday 05 February 2026 00:46:47 +0000 (0:00:00.798) 0:02:03.327 *****
2026-02-05 00:49:13.286313 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.286321 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.286328 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.286336 | orchestrator |
2026-02-05 00:49:13.286343 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-05 00:49:13.286366 | orchestrator | Thursday 05 February 2026 00:46:48 +0000 (0:00:00.263) 0:02:03.591 *****
2026-02-05 00:49:13.286374 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.286382 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.286389 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.286397 | orchestrator |
2026-02-05 00:49:13.286405 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-05 00:49:13.286413 | orchestrator | Thursday 05 February 2026 00:46:48 +0000 (0:00:00.563) 0:02:04.154 *****
2026-02-05 00:49:13.286421 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.286429 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.286436 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.286444 | orchestrator |
2026-02-05 00:49:13.286452 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-05 00:49:13.286460 | orchestrator | Thursday 05 February 2026 00:46:49 +0000 (0:00:00.586) 0:02:04.741 *****
2026-02-05 00:49:13.286467 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.286475 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.286482 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.286490 | orchestrator |
2026-02-05 00:49:13.286498 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-05 00:49:13.286506 | orchestrator | Thursday 05 February 2026 00:46:50 +0000 (0:00:00.927) 0:02:05.669 *****
2026-02-05 00:49:13.286514 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:13.286521 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:13.286529 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:13.286536 | orchestrator |
2026-02-05 00:49:13.286543 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-05 00:49:13.286550 | orchestrator | Thursday 05 February 2026 00:46:51 +0000 (0:00:00.797) 0:02:06.467 *****
2026-02-05 00:49:13.286557 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.286571 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.286579 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.286587 | orchestrator |
2026-02-05 00:49:13.286594 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-05 00:49:13.286602 | orchestrator | Thursday 05 February 2026 00:46:51 +0000 (0:00:00.253) 0:02:06.720 *****
2026-02-05 00:49:13.286609 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:13.286617 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:13.286624 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:13.286632 | orchestrator |
2026-02-05 00:49:13.286638 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-05 00:49:13.286646 | orchestrator | Thursday 05 February 2026 00:46:51 +0000 (0:00:00.265) 0:02:06.986 *****
2026-02-05 00:49:13.286653 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.286661 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.286669 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.286676 | orchestrator |
2026-02-05 00:49:13.286691 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-05 00:49:13.286698 | orchestrator | Thursday 05 February 2026 00:46:52 +0000 (0:00:00.573) 0:02:07.560 *****
2026-02-05 00:49:13.286706 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:13.286713 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:13.286720 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:13.286728 | orchestrator |
2026-02-05 00:49:13.286736 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-05 00:49:13.286745 | orchestrator | Thursday 05 February 2026 00:46:52 +0000 (0:00:00.728) 0:02:08.288 *****
2026-02-05 00:49:13.286752 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-05 00:49:13.286767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-05 00:49:13.286775 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-05 00:49:13.286783 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-05 00:49:13.286791 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-05 00:49:13.286798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-05 00:49:13.286805 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-05 00:49:13.286813 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-05 00:49:13.286821 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-05 00:49:13.286829 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-05 00:49:13.286837 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-05 00:49:13.286846 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-05 00:49:13.286854 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-05 00:49:13.286862 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-05 00:49:13.286870 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-05 00:49:13.286877 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-05 00:49:13.286884 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-05 00:49:13.286892 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-05 00:49:13.286898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-05 00:49:13.286906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-05 00:49:13.286913 | orchestrator |
2026-02-05 00:49:13.286920 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-05 00:49:13.286928 | orchestrator |
2026-02-05 00:49:13.286936 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-05 00:49:13.286944 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:03.255) 0:02:11.543 *****
2026-02-05 00:49:13.286953 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:49:13.286961 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:49:13.286968 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:49:13.286977 | orchestrator |
2026-02-05 00:49:13.286984 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-05 00:49:13.286993 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.290) 0:02:11.833 *****
2026-02-05 00:49:13.287023 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:49:13.287032 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:49:13.287040 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:49:13.287049 | orchestrator |
2026-02-05 00:49:13.287058 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-05 00:49:13.287066 | orchestrator | Thursday 05 February 2026 00:46:57 +0000 (0:00:00.805) 0:02:12.639 *****
2026-02-05 00:49:13.287074 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:49:13.287083 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:49:13.287092 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:49:13.287100 | orchestrator |
2026-02-05 00:49:13.287108 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-05 00:49:13.287116 | orchestrator | Thursday 05 February 2026 00:46:57 +0000 (0:00:00.268) 0:02:12.908 *****
2026-02-05 00:49:13.287130 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:49:13.287140 | orchestrator |
2026-02-05 00:49:13.287147 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-05 00:49:13.287155 | orchestrator | Thursday 05 February 2026 00:46:57 +0000 (0:00:00.407) 0:02:13.316 *****
2026-02-05 00:49:13.287163 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.287170 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.287177 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.287184 | orchestrator |
2026-02-05 00:49:13.287192 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-05 00:49:13.287200 | orchestrator | Thursday 05 February 2026 00:46:58 +0000 (0:00:00.361) 0:02:13.677 *****
2026-02-05 00:49:13.287207 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.287215 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.287223 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.287231 | orchestrator |
2026-02-05 00:49:13.287239 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-05 00:49:13.287247 | orchestrator | Thursday 05 February 2026 00:46:58 +0000 (0:00:00.216) 0:02:13.894 *****
2026-02-05 00:49:13.287256 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:49:13.287264 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:49:13.287273 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:49:13.287280 | orchestrator |
2026-02-05 00:49:13.287290 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-05 00:49:13.287298 | orchestrator | Thursday 05 February 2026 00:46:58 +0000 (0:00:00.220) 0:02:14.115 *****
2026-02-05 00:49:13.287307 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:49:13.287315 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:49:13.287323 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:49:13.287332 | orchestrator |
2026-02-05 00:49:13.287369 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-05 00:49:13.287381 | orchestrator | Thursday 05 February 2026 00:46:59 +0000 (0:00:00.597) 0:02:14.713 *****
2026-02-05 00:49:13.287389 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:49:13.287396 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:49:13.287404 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:49:13.287412 | orchestrator |
2026-02-05 00:49:13.287420 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-05 00:49:13.287427 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:01.312) 0:02:16.025 *****
2026-02-05 00:49:13.287434 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:49:13.287441 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:49:13.287449 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:49:13.287456 | orchestrator |
2026-02-05 00:49:13.287464 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-05 00:49:13.287472 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:01.298) 0:02:17.323 *****
2026-02-05 00:49:13.287480 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:49:13.287489 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:49:13.287507 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:49:13.287515 | orchestrator |
2026-02-05 00:49:13.287524 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-05 00:49:13.287531 | orchestrator |
2026-02-05 00:49:13.287539 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-05 00:49:13.287547 | orchestrator | Thursday 05 February 2026 00:47:13 +0000 (0:00:11.221) 0:02:28.545 *****
2026-02-05 00:49:13.287554 | orchestrator | ok: [testbed-manager]
2026-02-05 00:49:13.287561 | orchestrator |
2026-02-05 00:49:13.287568 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-05 00:49:13.287577 | orchestrator | Thursday 05 February 2026 00:47:14 +0000 (0:00:01.167) 0:02:29.712 *****
2026-02-05 00:49:13.287585 | orchestrator | changed: [testbed-manager]
2026-02-05 00:49:13.287593 | orchestrator |
2026-02-05 00:49:13.287601 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-05 00:49:13.287608 | orchestrator | Thursday 05 February 2026 00:47:14 +0000 (0:00:00.410) 0:02:30.122 *****
2026-02-05 00:49:13.287618 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-05 00:49:13.287627 | orchestrator |
2026-02-05 00:49:13.287635 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-05 00:49:13.287643 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.540) 0:02:30.662 *****
2026-02-05 00:49:13.287650 | orchestrator | changed: [testbed-manager]
2026-02-05 00:49:13.287657 | orchestrator |
2026-02-05 00:49:13.287664 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-05 00:49:13.287671 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:00.885) 0:02:31.548 *****
2026-02-05 00:49:13.287678 | orchestrator | changed: [testbed-manager]
2026-02-05 00:49:13.287686 | orchestrator |
2026-02-05 00:49:13.287693 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-05 00:49:13.287700 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:00.501) 0:02:32.050 *****
2026-02-05 00:49:13.287708 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 00:49:13.287715 | orchestrator |
2026-02-05 00:49:13.287722 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-05 00:49:13.287730 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:01.534) 0:02:33.585 *****
2026-02-05 00:49:13.287737 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 00:49:13.287744 | orchestrator |
2026-02-05 00:49:13.287751 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-05 00:49:13.287759 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:00.893) 0:02:34.478 *****
2026-02-05 00:49:13.287766 | orchestrator | changed: [testbed-manager]
2026-02-05 00:49:13.287773 | orchestrator |
2026-02-05 00:49:13.287780 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-05 00:49:13.287788 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:00.385) 0:02:34.864 *****
2026-02-05 00:49:13.287795 | orchestrator | changed: [testbed-manager]
2026-02-05 00:49:13.287802 | orchestrator |
2026-02-05 00:49:13.287809 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-05 00:49:13.287817 | orchestrator |
2026-02-05 00:49:13.287831 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-05 00:49:13.287839 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:00.363) 0:02:35.327 *****
2026-02-05 00:49:13.287846 | orchestrator | ok: [testbed-manager]
2026-02-05 00:49:13.287853 | orchestrator |
2026-02-05 00:49:13.287861 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-05 00:49:13.287868 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:00.219) 0:02:35.691 *****
2026-02-05 00:49:13.287876 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 00:49:13.287883 | orchestrator |
2026-02-05 00:49:13.287891 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-05 00:49:13.287908 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:00.788) 0:02:35.911 *****
2026-02-05 00:49:13.287917 | orchestrator | ok: [testbed-manager]
2026-02-05 00:49:13.287923 | orchestrator |
2026-02-05 00:49:13.287930 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-05 00:49:13.287936 | orchestrator | Thursday 05 February 2026 00:47:21 +0000 (0:00:01.154) 0:02:36.699 *****
2026-02-05 00:49:13.287943 | orchestrator | ok: [testbed-manager]
2026-02-05 00:49:13.287949 | orchestrator |
2026-02-05 00:49:13.287955 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-05 00:49:13.287962 | orchestrator | Thursday 05 February 2026 00:47:22 +0000 (0:00:00.676) 0:02:37.854 *****
2026-02-05 00:49:13.287968 | orchestrator | changed: [testbed-manager]
2026-02-05 00:49:13.287973 | orchestrator |
2026-02-05 00:49:13.287980 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-05 00:49:13.287986 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.328) 0:02:38.530 *****
2026-02-05 00:49:13.287993 | orchestrator | ok: [testbed-manager]
2026-02-05 00:49:13.288000 | orchestrator |
2026-02-05 00:49:13.288014 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-05 00:49:13.288022 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.328) 0:02:38.858 *****
2026-02-05 00:49:13.288028 |
orchestrator | changed: [testbed-manager] 2026-02-05 00:49:13.288035 | orchestrator | 2026-02-05 00:49:13.288042 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-05 00:49:13.288049 | orchestrator | Thursday 05 February 2026 00:47:29 +0000 (0:00:06.334) 0:02:45.193 ***** 2026-02-05 00:49:13.288056 | orchestrator | changed: [testbed-manager] 2026-02-05 00:49:13.288063 | orchestrator | 2026-02-05 00:49:13.288070 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-05 00:49:13.288077 | orchestrator | Thursday 05 February 2026 00:47:45 +0000 (0:00:15.613) 0:03:00.807 ***** 2026-02-05 00:49:13.288084 | orchestrator | ok: [testbed-manager] 2026-02-05 00:49:13.288091 | orchestrator | 2026-02-05 00:49:13.288098 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-05 00:49:13.288105 | orchestrator | 2026-02-05 00:49:13.288112 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-05 00:49:13.288120 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:00.699) 0:03:01.507 ***** 2026-02-05 00:49:13.288127 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:13.288135 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:49:13.288142 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:13.288149 | orchestrator | 2026-02-05 00:49:13.288156 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-05 00:49:13.288164 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:00.294) 0:03:01.801 ***** 2026-02-05 00:49:13.288171 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288179 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.288186 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.288193 | orchestrator | 2026-02-05 00:49:13.288200 | 
orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-05 00:49:13.288207 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:00.327) 0:03:02.129 ***** 2026-02-05 00:49:13.288214 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:49:13.288222 | orchestrator | 2026-02-05 00:49:13.288229 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-05 00:49:13.288236 | orchestrator | Thursday 05 February 2026 00:47:47 +0000 (0:00:00.692) 0:03:02.821 ***** 2026-02-05 00:49:13.288243 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288250 | orchestrator | 2026-02-05 00:49:13.288258 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-05 00:49:13.288265 | orchestrator | Thursday 05 February 2026 00:47:48 +0000 (0:00:00.799) 0:03:03.620 ***** 2026-02-05 00:49:13.288272 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288286 | orchestrator | 2026-02-05 00:49:13.288293 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-05 00:49:13.288300 | orchestrator | Thursday 05 February 2026 00:47:48 +0000 (0:00:00.725) 0:03:04.346 ***** 2026-02-05 00:49:13.288306 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288312 | orchestrator | 2026-02-05 00:49:13.288318 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-05 00:49:13.288324 | orchestrator | Thursday 05 February 2026 00:47:49 +0000 (0:00:00.114) 0:03:04.461 ***** 2026-02-05 00:49:13.288330 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288337 | orchestrator | 2026-02-05 00:49:13.288344 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-05 
00:49:13.288385 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:01.006) 0:03:05.467 ***** 2026-02-05 00:49:13.288393 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288401 | orchestrator | 2026-02-05 00:49:13.288408 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-05 00:49:13.288415 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:00.108) 0:03:05.576 ***** 2026-02-05 00:49:13.288422 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288430 | orchestrator | 2026-02-05 00:49:13.288437 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-05 00:49:13.288450 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:00.111) 0:03:05.687 ***** 2026-02-05 00:49:13.288457 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288464 | orchestrator | 2026-02-05 00:49:13.288472 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-05 00:49:13.288481 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:00.198) 0:03:05.886 ***** 2026-02-05 00:49:13.288488 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288496 | orchestrator | 2026-02-05 00:49:13.288504 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-05 00:49:13.288511 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:00.147) 0:03:06.033 ***** 2026-02-05 00:49:13.288518 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288525 | orchestrator | 2026-02-05 00:49:13.288533 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-05 00:49:13.288540 | orchestrator | Thursday 05 February 2026 00:47:55 +0000 (0:00:04.665) 0:03:10.699 ***** 2026-02-05 00:49:13.288547 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=deployment/cilium-operator) 2026-02-05 00:49:13.288553 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-02-05 00:49:13.288561 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-05 00:49:13.288568 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-05 00:49:13.288576 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-05 00:49:13.288584 | orchestrator | 2026-02-05 00:49:13.288592 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-05 00:49:13.288599 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:50.509) 0:04:01.209 ***** 2026-02-05 00:49:13.288615 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288622 | orchestrator | 2026-02-05 00:49:13.288629 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-05 00:49:13.288637 | orchestrator | Thursday 05 February 2026 00:48:48 +0000 (0:00:02.228) 0:04:03.437 ***** 2026-02-05 00:49:13.288645 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288652 | orchestrator | 2026-02-05 00:49:13.288660 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-05 00:49:13.288668 | orchestrator | Thursday 05 February 2026 00:48:49 +0000 (0:00:01.733) 0:04:05.170 ***** 2026-02-05 00:49:13.288676 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 00:49:13.288690 | orchestrator | 2026-02-05 00:49:13.288699 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-05 00:49:13.288706 | orchestrator | Thursday 05 February 2026 00:48:50 +0000 (0:00:01.087) 0:04:06.258 ***** 2026-02-05 00:49:13.288713 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288720 | 
orchestrator | 2026-02-05 00:49:13.288728 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-05 00:49:13.288737 | orchestrator | Thursday 05 February 2026 00:48:50 +0000 (0:00:00.128) 0:04:06.386 ***** 2026-02-05 00:49:13.288745 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-05 00:49:13.288753 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-05 00:49:13.288761 | orchestrator | 2026-02-05 00:49:13.288768 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-05 00:49:13.288774 | orchestrator | Thursday 05 February 2026 00:48:52 +0000 (0:00:01.667) 0:04:08.054 ***** 2026-02-05 00:49:13.288782 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.288789 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.288796 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.288803 | orchestrator | 2026-02-05 00:49:13.288810 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-05 00:49:13.288816 | orchestrator | Thursday 05 February 2026 00:48:52 +0000 (0:00:00.300) 0:04:08.354 ***** 2026-02-05 00:49:13.288824 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:13.288831 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:49:13.288837 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:13.288845 | orchestrator | 2026-02-05 00:49:13.288853 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-05 00:49:13.288860 | orchestrator | 2026-02-05 00:49:13.288867 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-05 00:49:13.288875 | orchestrator | Thursday 05 February 2026 00:48:53 +0000 (0:00:01.005) 0:04:09.360 ***** 2026-02-05 00:49:13.288882 | 
orchestrator | ok: [testbed-manager] 2026-02-05 00:49:13.288889 | orchestrator | 2026-02-05 00:49:13.288897 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-05 00:49:13.288904 | orchestrator | Thursday 05 February 2026 00:48:54 +0000 (0:00:00.131) 0:04:09.492 ***** 2026-02-05 00:49:13.288912 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:49:13.288918 | orchestrator | 2026-02-05 00:49:13.288925 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-05 00:49:13.288931 | orchestrator | Thursday 05 February 2026 00:48:54 +0000 (0:00:00.202) 0:04:09.695 ***** 2026-02-05 00:49:13.288938 | orchestrator | changed: [testbed-manager] 2026-02-05 00:49:13.288958 | orchestrator | 2026-02-05 00:49:13.288966 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-05 00:49:13.288973 | orchestrator | 2026-02-05 00:49:13.288979 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-05 00:49:13.288986 | orchestrator | Thursday 05 February 2026 00:48:59 +0000 (0:00:05.366) 0:04:15.061 ***** 2026-02-05 00:49:13.288992 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:49:13.288999 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:49:13.289006 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:49:13.289013 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:13.289020 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:49:13.289027 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:13.289034 | orchestrator | 2026-02-05 00:49:13.289041 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-05 00:49:13.289052 | orchestrator | Thursday 05 February 2026 00:49:00 +0000 (0:00:00.738) 0:04:15.800 ***** 2026-02-05 00:49:13.289059 | orchestrator | ok: [testbed-node-4 
-> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-05 00:49:13.289066 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-05 00:49:13.289079 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-05 00:49:13.289085 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 00:49:13.289092 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 00:49:13.289099 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 00:49:13.289105 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 00:49:13.289111 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 00:49:13.289118 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 00:49:13.289125 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 00:49:13.289131 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 00:49:13.289138 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 00:49:13.289152 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 00:49:13.289161 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 00:49:13.289171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 00:49:13.289180 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 00:49:13.289190 | orchestrator | ok: 
[testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 00:49:13.289199 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 00:49:13.289208 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 00:49:13.289217 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 00:49:13.289226 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 00:49:13.289235 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 00:49:13.289244 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 00:49:13.289253 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 00:49:13.289262 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 00:49:13.289271 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 00:49:13.289280 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 00:49:13.289288 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 00:49:13.289298 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 00:49:13.289307 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 00:49:13.289316 | orchestrator | 2026-02-05 00:49:13.289324 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-05 00:49:13.289333 | orchestrator | Thursday 05 February 2026 00:49:11 +0000 (0:00:10.931) 0:04:26.732 ***** 2026-02-05 00:49:13.289342 | 
orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:13.289389 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.289399 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.289408 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.289417 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.289425 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.289435 | orchestrator | 2026-02-05 00:49:13.289443 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-05 00:49:13.289458 | orchestrator | Thursday 05 February 2026 00:49:11 +0000 (0:00:00.504) 0:04:27.237 ***** 2026-02-05 00:49:13.289467 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:13.289477 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:13.289486 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:13.289495 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:13.289504 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:13.289514 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:13.289522 | orchestrator | 2026-02-05 00:49:13.289532 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:49:13.289541 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:49:13.289549 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-05 00:49:13.289560 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 00:49:13.289568 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 00:49:13.289577 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 00:49:13.289587 | orchestrator | 
testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 00:49:13.289596 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 00:49:13.289604 | orchestrator | 2026-02-05 00:49:13.289613 | orchestrator | 2026-02-05 00:49:13.289621 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:49:13.289627 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:00.412) 0:04:27.649 ***** 2026-02-05 00:49:13.289634 | orchestrator | =============================================================================== 2026-02-05 00:49:13.289640 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 50.51s 2026-02-05 00:49:13.289646 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.72s 2026-02-05 00:49:13.289653 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.81s 2026-02-05 00:49:13.289665 | orchestrator | kubectl : Install required packages ------------------------------------ 15.61s 2026-02-05 00:49:13.289671 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.22s 2026-02-05 00:49:13.289678 | orchestrator | Manage labels ---------------------------------------------------------- 10.93s 2026-02-05 00:49:13.289684 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.52s 2026-02-05 00:49:13.289690 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.33s 2026-02-05 00:49:13.289696 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.37s 2026-02-05 00:49:13.289703 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.67s 2026-02-05 00:49:13.289708 | orchestrator | k3s_server : 
Detect Kubernetes version for label compatibility ---------- 3.36s 2026-02-05 00:49:13.289715 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.26s 2026-02-05 00:49:13.289721 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.87s 2026-02-05 00:49:13.289728 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.23s 2026-02-05 00:49:13.289734 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.10s 2026-02-05 00:49:13.289744 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.08s 2026-02-05 00:49:13.289750 | orchestrator | k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries --- 1.88s 2026-02-05 00:49:13.289756 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.88s 2026-02-05 00:49:13.289762 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.76s 2026-02-05 00:49:13.289769 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.73s 2026-02-05 00:49:13.289775 | orchestrator | 2026-02-05 00:49:13 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:13.289781 | orchestrator | 2026-02-05 00:49:13 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:13.289788 | orchestrator | 2026-02-05 00:49:13 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:13.289794 | orchestrator | 2026-02-05 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:16.325911 | orchestrator | 2026-02-05 00:49:16 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:16.325992 | orchestrator | 2026-02-05 00:49:16 | INFO  | 
Task c0cb2090-515b-450b-8c02-4f49a528e897 is in state STARTED 2026-02-05 00:49:16.326536 | orchestrator | 2026-02-05 00:49:16 | INFO  | Task 9a0acab6-fd94-4ac7-a7a2-afda7f410fce is in state STARTED 2026-02-05 00:49:16.327031 | orchestrator | 2026-02-05 00:49:16 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:16.327671 | orchestrator | 2026-02-05 00:49:16 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:16.328093 | orchestrator | 2026-02-05 00:49:16 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:16.328128 | orchestrator | 2026-02-05 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:19.361592 | orchestrator | 2026-02-05 00:49:19 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:19.361690 | orchestrator | 2026-02-05 00:49:19 | INFO  | Task c0cb2090-515b-450b-8c02-4f49a528e897 is in state STARTED 2026-02-05 00:49:19.361757 | orchestrator | 2026-02-05 00:49:19 | INFO  | Task 9a0acab6-fd94-4ac7-a7a2-afda7f410fce is in state SUCCESS 2026-02-05 00:49:19.362311 | orchestrator | 2026-02-05 00:49:19 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:19.362973 | orchestrator | 2026-02-05 00:49:19 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:19.365499 | orchestrator | 2026-02-05 00:49:19 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:19.365531 | orchestrator | 2026-02-05 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:22.396112 | orchestrator | 2026-02-05 00:49:22 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:22.396198 | orchestrator | 2026-02-05 00:49:22 | INFO  | Task c0cb2090-515b-450b-8c02-4f49a528e897 is in state STARTED 2026-02-05 00:49:22.396587 | orchestrator | 2026-02-05 00:49:22 | INFO  | Task 
84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:22.397438 | orchestrator | 2026-02-05 00:49:22 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:22.398755 | orchestrator | 2026-02-05 00:49:22 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:22.398781 | orchestrator | 2026-02-05 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:25.432514 | orchestrator | 2026-02-05 00:49:25 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:25.432591 | orchestrator | 2026-02-05 00:49:25 | INFO  | Task c0cb2090-515b-450b-8c02-4f49a528e897 is in state SUCCESS 2026-02-05 00:49:25.432756 | orchestrator | 2026-02-05 00:49:25 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:25.433391 | orchestrator | 2026-02-05 00:49:25 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:25.433860 | orchestrator | 2026-02-05 00:49:25 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:25.433877 | orchestrator | 2026-02-05 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:28.460647 | orchestrator | 2026-02-05 00:49:28 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:28.461107 | orchestrator | 2026-02-05 00:49:28 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:28.461707 | orchestrator | 2026-02-05 00:49:28 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:28.462447 | orchestrator | 2026-02-05 00:49:28 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:28.462499 | orchestrator | 2026-02-05 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:31.506937 | orchestrator | 2026-02-05 00:49:31 | INFO  | Task 
cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:31.507383 | orchestrator | 2026-02-05 00:49:31 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:31.508486 | orchestrator | 2026-02-05 00:49:31 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:31.509118 | orchestrator | 2026-02-05 00:49:31 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:31.509145 | orchestrator | 2026-02-05 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:34.546644 | orchestrator | 2026-02-05 00:49:34 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:34.547029 | orchestrator | 2026-02-05 00:49:34 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:34.547782 | orchestrator | 2026-02-05 00:49:34 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state STARTED 2026-02-05 00:49:34.548518 | orchestrator | 2026-02-05 00:49:34 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:49:34.548544 | orchestrator | 2026-02-05 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:37.574936 | orchestrator | 2026-02-05 00:49:37 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:49:37.575088 | orchestrator | 2026-02-05 00:49:37 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED 2026-02-05 00:49:37.575829 | orchestrator | 2026-02-05 00:49:37 | INFO  | Task 47a1d4a9-dd13-4e18-925b-fcbf48a8c225 is in state SUCCESS 2026-02-05 00:49:37.577277 | orchestrator | 2026-02-05 00:49:37.577320 | orchestrator | 2026-02-05 00:49:37.577331 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-05 00:49:37.577336 | orchestrator | 2026-02-05 00:49:37.577340 | orchestrator | TASK [Get kubeconfig file] 
***************************************************** 2026-02-05 00:49:37.577345 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:00.202) 0:00:00.202 ***** 2026-02-05 00:49:37.577349 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-05 00:49:37.577353 | orchestrator | 2026-02-05 00:49:37.577357 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-05 00:49:37.577374 | orchestrator | Thursday 05 February 2026 00:49:17 +0000 (0:00:00.737) 0:00:00.940 ***** 2026-02-05 00:49:37.577378 | orchestrator | changed: [testbed-manager] 2026-02-05 00:49:37.577383 | orchestrator | 2026-02-05 00:49:37.577386 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-05 00:49:37.577390 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:01.022) 0:00:01.962 ***** 2026-02-05 00:49:37.577394 | orchestrator | changed: [testbed-manager] 2026-02-05 00:49:37.577398 | orchestrator | 2026-02-05 00:49:37.577401 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:49:37.577405 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:49:37.577411 | orchestrator | 2026-02-05 00:49:37.577415 | orchestrator | 2026-02-05 00:49:37.577419 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:49:37.577422 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.415) 0:00:02.378 ***** 2026-02-05 00:49:37.577426 | orchestrator | =============================================================================== 2026-02-05 00:49:37.577430 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s 2026-02-05 00:49:37.577434 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2026-02-05 
00:49:37.577438 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 2026-02-05 00:49:37.577441 | orchestrator | 2026-02-05 00:49:37.577445 | orchestrator | 2026-02-05 00:49:37.577449 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-05 00:49:37.577453 | orchestrator | 2026-02-05 00:49:37.577456 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-05 00:49:37.577460 | orchestrator | Thursday 05 February 2026 00:49:15 +0000 (0:00:00.171) 0:00:00.171 ***** 2026-02-05 00:49:37.577464 | orchestrator | ok: [testbed-manager] 2026-02-05 00:49:37.577469 | orchestrator | 2026-02-05 00:49:37.577473 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-05 00:49:37.577477 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:00.598) 0:00:00.770 ***** 2026-02-05 00:49:37.577480 | orchestrator | ok: [testbed-manager] 2026-02-05 00:49:37.577484 | orchestrator | 2026-02-05 00:49:37.577488 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-05 00:49:37.577492 | orchestrator | Thursday 05 February 2026 00:49:17 +0000 (0:00:00.533) 0:00:01.303 ***** 2026-02-05 00:49:37.577496 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-05 00:49:37.577499 | orchestrator | 2026-02-05 00:49:37.577503 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-05 00:49:37.577507 | orchestrator | Thursday 05 February 2026 00:49:17 +0000 (0:00:00.733) 0:00:02.036 ***** 2026-02-05 00:49:37.577511 | orchestrator | changed: [testbed-manager] 2026-02-05 00:49:37.577514 | orchestrator | 2026-02-05 00:49:37.577518 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-05 00:49:37.577522 | orchestrator | Thursday 05 
February 2026 00:49:19 +0000 (0:00:01.271) 0:00:03.308 ***** 2026-02-05 00:49:37.577526 | orchestrator | changed: [testbed-manager] 2026-02-05 00:49:37.577529 | orchestrator | 2026-02-05 00:49:37.577533 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-05 00:49:37.577537 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.481) 0:00:03.789 ***** 2026-02-05 00:49:37.577541 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 00:49:37.577544 | orchestrator | 2026-02-05 00:49:37.577548 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-05 00:49:37.577552 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:01.440) 0:00:05.230 ***** 2026-02-05 00:49:37.577556 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 00:49:37.577559 | orchestrator | 2026-02-05 00:49:37.577563 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-05 00:49:37.577570 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.739) 0:00:05.969 ***** 2026-02-05 00:49:37.577574 | orchestrator | ok: [testbed-manager] 2026-02-05 00:49:37.577578 | orchestrator | 2026-02-05 00:49:37.577581 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-05 00:49:37.577585 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.410) 0:00:06.380 ***** 2026-02-05 00:49:37.577589 | orchestrator | ok: [testbed-manager] 2026-02-05 00:49:37.577592 | orchestrator | 2026-02-05 00:49:37.577596 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:49:37.577600 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:49:37.577604 | orchestrator | 2026-02-05 00:49:37.577607 | orchestrator | 2026-02-05 00:49:37.577611 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:49:37.577615 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.291) 0:00:06.671 ***** 2026-02-05 00:49:37.577619 | orchestrator | =============================================================================== 2026-02-05 00:49:37.577622 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.44s 2026-02-05 00:49:37.577626 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.27s 2026-02-05 00:49:37.577630 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2026-02-05 00:49:37.577640 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2026-02-05 00:49:37.577644 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s 2026-02-05 00:49:37.577648 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2026-02-05 00:49:37.577652 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s 2026-02-05 00:49:37.577655 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2026-02-05 00:49:37.577659 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s 2026-02-05 00:49:37.577663 | orchestrator | 2026-02-05 00:49:37.577667 | orchestrator | 2026-02-05 00:49:37.577670 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-02-05 00:49:37.577674 | orchestrator | 2026-02-05 00:49:37.577678 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-05 00:49:37.577681 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.100) 0:00:00.100 ***** 2026-02-05 00:49:37.577685 | orchestrator | ok: 
[localhost] => { 2026-02-05 00:49:37.577689 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-02-05 00:49:37.577693 | orchestrator | } 2026-02-05 00:49:37.577698 | orchestrator | 2026-02-05 00:49:37.577701 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-02-05 00:49:37.577705 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.053) 0:00:00.154 ***** 2026-02-05 00:49:37.577710 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-02-05 00:49:37.577715 | orchestrator | ...ignoring 2026-02-05 00:49:37.577719 | orchestrator | 2026-02-05 00:49:37.577723 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-02-05 00:49:37.577727 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:03.275) 0:00:03.430 ***** 2026-02-05 00:49:37.577731 | orchestrator | skipping: [localhost] 2026-02-05 00:49:37.577734 | orchestrator | 2026-02-05 00:49:37.577738 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-02-05 00:49:37.577742 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:00.250) 0:00:03.680 ***** 2026-02-05 00:49:37.577745 | orchestrator | ok: [localhost] 2026-02-05 00:49:37.577749 | orchestrator | 2026-02-05 00:49:37.577753 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:49:37.577759 | orchestrator | 2026-02-05 00:49:37.577763 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:49:37.577767 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:00.316) 0:00:03.997 ***** 2026-02-05 00:49:37.577771 | orchestrator | ok: [testbed-node-0] 2026-02-05 
00:49:37.577774 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:49:37.577778 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:37.577782 | orchestrator | 2026-02-05 00:49:37.577785 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:49:37.577789 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:00.816) 0:00:04.813 ***** 2026-02-05 00:49:37.577793 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-05 00:49:37.577796 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-05 00:49:37.577800 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-05 00:49:37.577804 | orchestrator | 2026-02-05 00:49:37.577808 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-05 00:49:37.577811 | orchestrator | 2026-02-05 00:49:37.577815 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 00:49:37.577819 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:00.592) 0:00:05.406 ***** 2026-02-05 00:49:37.577823 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:49:37.577826 | orchestrator | 2026-02-05 00:49:37.577830 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 00:49:37.577864 | orchestrator | Thursday 05 February 2026 00:47:21 +0000 (0:00:00.959) 0:00:06.365 ***** 2026-02-05 00:49:37.577872 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:37.577876 | orchestrator | 2026-02-05 00:49:37.577880 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-05 00:49:37.577884 | orchestrator | Thursday 05 February 2026 00:47:22 +0000 (0:00:01.447) 0:00:07.812 ***** 2026-02-05 00:49:37.577888 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 00:49:37.577891 | orchestrator | 2026-02-05 00:49:37.577895 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-05 00:49:37.577899 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.412) 0:00:08.225 ***** 2026-02-05 00:49:37.577902 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:37.577906 | orchestrator | 2026-02-05 00:49:37.577910 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-05 00:49:37.577914 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.412) 0:00:08.637 ***** 2026-02-05 00:49:37.577917 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:37.577921 | orchestrator | 2026-02-05 00:49:37.577925 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-05 00:49:37.577928 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.315) 0:00:08.952 ***** 2026-02-05 00:49:37.577932 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:37.577936 | orchestrator | 2026-02-05 00:49:37.577940 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 00:49:37.577943 | orchestrator | Thursday 05 February 2026 00:47:24 +0000 (0:00:00.407) 0:00:09.360 ***** 2026-02-05 00:49:37.577947 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:49:37.577951 | orchestrator | 2026-02-05 00:49:37.577955 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 00:49:37.577964 | orchestrator | Thursday 05 February 2026 00:47:25 +0000 (0:00:01.078) 0:00:10.438 ***** 2026-02-05 00:49:37.577968 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:37.577972 | orchestrator | 2026-02-05 00:49:37.577975 | orchestrator | TASK [rabbitmq : List 
RabbitMQ policies] *************************************** 2026-02-05 00:49:37.577979 | orchestrator | Thursday 05 February 2026 00:47:26 +0000 (0:00:00.941) 0:00:11.379 ***** 2026-02-05 00:49:37.577986 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:37.577989 | orchestrator | 2026-02-05 00:49:37.577993 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-05 00:49:37.577997 | orchestrator | Thursday 05 February 2026 00:47:26 +0000 (0:00:00.345) 0:00:11.725 ***** 2026-02-05 00:49:37.578000 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:37.578004 | orchestrator | 2026-02-05 00:49:37.578008 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-05 00:49:37.578061 | orchestrator | Thursday 05 February 2026 00:47:26 +0000 (0:00:00.343) 0:00:12.068 ***** 2026-02-05 00:49:37.578069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
2026-02-05 00:49:37.578076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578087 | orchestrator | 2026-02-05 00:49:37.578094 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-05 00:49:37.578100 | orchestrator | Thursday 05 February 2026 00:47:28 +0000 (0:00:01.133) 0:00:13.201 ***** 2026-02-05 00:49:37.578122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578147 | orchestrator | 2026-02-05 00:49:37.578153 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] 
******************************* 2026-02-05 00:49:37.578160 | orchestrator | Thursday 05 February 2026 00:47:31 +0000 (0:00:03.466) 0:00:16.668 ***** 2026-02-05 00:49:37.578166 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 00:49:37.578173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 00:49:37.578179 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 00:49:37.578185 | orchestrator | 2026-02-05 00:49:37.578192 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-05 00:49:37.578202 | orchestrator | Thursday 05 February 2026 00:47:33 +0000 (0:00:02.306) 0:00:18.975 ***** 2026-02-05 00:49:37.578208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 00:49:37.578214 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 00:49:37.578220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 00:49:37.578227 | orchestrator | 2026-02-05 00:49:37.578238 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-05 00:49:37.578244 | orchestrator | Thursday 05 February 2026 00:47:36 +0000 (0:00:02.429) 0:00:21.405 ***** 2026-02-05 00:49:37.578250 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 00:49:37.578256 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 00:49:37.578262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 00:49:37.578268 | orchestrator | 2026-02-05 00:49:37.578274 | orchestrator | TASK 
[rabbitmq : Copying over advanced.config] ********************************* 2026-02-05 00:49:37.578280 | orchestrator | Thursday 05 February 2026 00:47:38 +0000 (0:00:01.785) 0:00:23.190 ***** 2026-02-05 00:49:37.578286 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 00:49:37.578332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 00:49:37.578338 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 00:49:37.578347 | orchestrator | 2026-02-05 00:49:37.578351 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-05 00:49:37.578355 | orchestrator | Thursday 05 February 2026 00:47:40 +0000 (0:00:02.233) 0:00:25.423 ***** 2026-02-05 00:49:37.578359 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 00:49:37.578363 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 00:49:37.578366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 00:49:37.578370 | orchestrator | 2026-02-05 00:49:37.578374 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-05 00:49:37.578378 | orchestrator | Thursday 05 February 2026 00:47:42 +0000 (0:00:01.962) 0:00:27.386 ***** 2026-02-05 00:49:37.578382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 00:49:37.578385 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 00:49:37.578389 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 00:49:37.578393 | 
orchestrator | 2026-02-05 00:49:37.578397 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 00:49:37.578400 | orchestrator | Thursday 05 February 2026 00:47:43 +0000 (0:00:01.482) 0:00:28.869 ***** 2026-02-05 00:49:37.578404 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:37.578408 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:37.578412 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:37.578415 | orchestrator | 2026-02-05 00:49:37.578419 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-05 00:49:37.578423 | orchestrator | Thursday 05 February 2026 00:47:44 +0000 (0:00:00.510) 0:00:29.380 ***** 2026-02-05 00:49:37.578427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:49:37.578455 | orchestrator | 2026-02-05 00:49:37.578459 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-05 00:49:37.578463 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:01.954) 0:00:31.335 ***** 2026-02-05 00:49:37.578467 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:37.578471 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:37.578474 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:37.578478 | orchestrator | 2026-02-05 00:49:37.578482 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-05 00:49:37.578486 | orchestrator | Thursday 05 February 2026 00:47:47 +0000 (0:00:01.065) 0:00:32.401 ***** 2026-02-05 00:49:37.578489 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:37.578493 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:37.578497 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:37.578501 | orchestrator | 2026-02-05 00:49:37.578504 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-05 00:49:37.578508 | orchestrator | Thursday 05 February 2026 00:47:56 +0000 (0:00:08.810) 0:00:41.211 ***** 2026-02-05 00:49:37.578515 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:37.578519 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:37.578523 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:37.578527 | orchestrator | 2026-02-05 00:49:37.578530 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 00:49:37.578534 | orchestrator | 2026-02-05 00:49:37.578538 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 00:49:37.578542 | orchestrator | Thursday 05 February 2026 00:47:57 +0000 (0:00:01.075) 
0:00:42.287 *****
2026-02-05 00:49:37.578546 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:37.578549 | orchestrator |
2026-02-05 00:49:37.578553 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-05 00:49:37.578557 | orchestrator | Thursday 05 February 2026 00:47:57 +0000 (0:00:00.760) 0:00:43.047 *****
2026-02-05 00:49:37.578561 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:49:37.578564 | orchestrator |
2026-02-05 00:49:37.578568 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-05 00:49:37.578572 | orchestrator | Thursday 05 February 2026 00:47:58 +0000 (0:00:00.264) 0:00:43.312 *****
2026-02-05 00:49:37.578576 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:37.578579 | orchestrator |
2026-02-05 00:49:37.578583 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-05 00:49:37.578587 | orchestrator | Thursday 05 February 2026 00:47:59 +0000 (0:00:01.532) 0:00:44.845 *****
2026-02-05 00:49:37.578590 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:49:37.578594 | orchestrator |
2026-02-05 00:49:37.578598 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-05 00:49:37.578602 | orchestrator |
2026-02-05 00:49:37.578605 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-05 00:49:37.578609 | orchestrator | Thursday 05 February 2026 00:48:52 +0000 (0:00:52.988) 0:01:37.833 *****
2026-02-05 00:49:37.578613 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:37.578617 | orchestrator |
2026-02-05 00:49:37.578620 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-05 00:49:37.578624 | orchestrator | Thursday 05 February 2026 00:48:53 +0000 (0:00:00.611) 0:01:38.444 *****
2026-02-05 00:49:37.578628 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:49:37.578631 | orchestrator |
2026-02-05 00:49:37.578635 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-05 00:49:37.578639 | orchestrator | Thursday 05 February 2026 00:48:53 +0000 (0:00:00.205) 0:01:38.649 *****
2026-02-05 00:49:37.578643 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:37.578646 | orchestrator |
2026-02-05 00:49:37.578650 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-05 00:49:37.578654 | orchestrator | Thursday 05 February 2026 00:49:00 +0000 (0:00:06.972) 0:01:45.622 *****
2026-02-05 00:49:37.578658 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:49:37.578661 | orchestrator |
2026-02-05 00:49:37.578665 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-05 00:49:37.578669 | orchestrator |
2026-02-05 00:49:37.578673 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-05 00:49:37.578681 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:11.566) 0:01:57.189 *****
2026-02-05 00:49:37.578685 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:37.578689 | orchestrator |
2026-02-05 00:49:37.578692 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-05 00:49:37.578696 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:00.610) 0:01:57.799 *****
2026-02-05 00:49:37.578700 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:49:37.578703 | orchestrator |
2026-02-05 00:49:37.578707 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-05 00:49:37.578711 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:00.192) 0:01:57.992 *****
2026-02-05 00:49:37.578715 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:37.578722 | orchestrator |
2026-02-05 00:49:37.578728 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-05 00:49:37.578734 | orchestrator | Thursday 05 February 2026 00:49:14 +0000 (0:00:01.658) 0:01:59.651 *****
2026-02-05 00:49:37.578740 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:49:37.578747 | orchestrator |
2026-02-05 00:49:37.578753 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-05 00:49:37.578758 | orchestrator |
2026-02-05 00:49:37.578764 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-05 00:49:37.578770 | orchestrator | Thursday 05 February 2026 00:49:31 +0000 (0:00:17.005) 0:02:16.656 *****
2026-02-05 00:49:37.578775 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:49:37.578780 | orchestrator |
2026-02-05 00:49:37.578786 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-05 00:49:37.578792 | orchestrator | Thursday 05 February 2026 00:49:32 +0000 (0:00:00.442) 0:02:17.099 *****
2026-02-05 00:49:37.578798 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-05 00:49:37.578803 | orchestrator | enable_outward_rabbitmq_True
2026-02-05 00:49:37.578809 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-05 00:49:37.578815 | orchestrator | outward_rabbitmq_restart
2026-02-05 00:49:37.578821 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:49:37.578827 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:49:37.578833 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:49:37.578839 | orchestrator |
2026-02-05 00:49:37.578844 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-02-05 00:49:37.578850 | orchestrator | skipping: no hosts matched
2026-02-05 00:49:37.578856 | orchestrator |
2026-02-05 00:49:37.578862 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-02-05 00:49:37.578868 | orchestrator | skipping: no hosts matched
2026-02-05 00:49:37.578874 | orchestrator |
2026-02-05 00:49:37.578880 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-02-05 00:49:37.578886 | orchestrator | skipping: no hosts matched
2026-02-05 00:49:37.578892 | orchestrator |
2026-02-05 00:49:37.578898 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:49:37.578905 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-02-05 00:49:37.578912 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-05 00:49:37.578917 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:49:37.578924 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:49:37.578930 | orchestrator |
2026-02-05 00:49:37.578936 | orchestrator |
2026-02-05 00:49:37.578942 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:49:37.578948 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:02.730) 0:02:19.829 *****
2026-02-05 00:49:37.578955 | orchestrator | ===============================================================================
2026-02-05 00:49:37.578961 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.56s
2026-02-05 00:49:37.578967 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.16s
2026-02-05 00:49:37.578972 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.81s
2026-02-05 00:49:37.578976 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.47s
2026-02-05 00:49:37.578980 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.28s
2026-02-05 00:49:37.578988 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.73s
2026-02-05 00:49:37.578992 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.43s
2026-02-05 00:49:37.578995 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.31s
2026-02-05 00:49:37.578999 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.23s
2026-02-05 00:49:37.579003 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.98s
2026-02-05 00:49:37.579006 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.96s
2026-02-05 00:49:37.579010 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.95s
2026-02-05 00:49:37.579014 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.79s
2026-02-05 00:49:37.579017 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.48s
2026-02-05 00:49:37.579021 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.45s
2026-02-05 00:49:37.579028 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.13s
2026-02-05 00:49:37.579035 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.08s
2026-02-05 00:49:37.579039 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.08s
2026-02-05 00:49:37.579042 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.07s
2026-02-05 00:49:37.579046 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.96s
2026-02-05 00:49:37.579050 | orchestrator | 2026-02-05 00:49:37 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:37.579054 | orchestrator | 2026-02-05 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:40.620210 | orchestrator | 2026-02-05 00:49:40 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:40.621466 | orchestrator | 2026-02-05 00:49:40 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:40.622846 | orchestrator | 2026-02-05 00:49:40 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:40.622888 | orchestrator | 2026-02-05 00:49:40 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:43.663387 | orchestrator | 2026-02-05 00:49:43 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:43.666938 | orchestrator | 2026-02-05 00:49:43 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:43.667016 | orchestrator | 2026-02-05 00:49:43 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:43.667027 | orchestrator | 2026-02-05 00:49:43 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:46.709545 | orchestrator | 2026-02-05 00:49:46 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:46.711267 | orchestrator | 2026-02-05 00:49:46 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:46.713693 | orchestrator | 2026-02-05 00:49:46 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:46.714066 | orchestrator | 2026-02-05 00:49:46 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:49.747080 | orchestrator | 2026-02-05 00:49:49 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:49.747163 | orchestrator | 2026-02-05 00:49:49 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:49.747572 | orchestrator | 2026-02-05 00:49:49 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:49.747633 | orchestrator | 2026-02-05 00:49:49 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:52.779404 | orchestrator | 2026-02-05 00:49:52 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:52.781445 | orchestrator | 2026-02-05 00:49:52 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:52.781508 | orchestrator | 2026-02-05 00:49:52 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:52.781520 | orchestrator | 2026-02-05 00:49:52 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:55.813455 | orchestrator | 2026-02-05 00:49:55 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:55.816598 | orchestrator | 2026-02-05 00:49:55 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:55.819414 | orchestrator | 2026-02-05 00:49:55 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:55.819517 | orchestrator | 2026-02-05 00:49:55 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:58.858906 | orchestrator | 2026-02-05 00:49:58 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:49:58.860908 | orchestrator | 2026-02-05 00:49:58 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:49:58.863457 | orchestrator | 2026-02-05 00:49:58 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:49:58.864181 | orchestrator | 2026-02-05 00:49:58 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:01.907366 | orchestrator | 2026-02-05 00:50:01 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:01.908463 | orchestrator | 2026-02-05 00:50:01 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:01.908507 | orchestrator | 2026-02-05 00:50:01 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:01.908531 | orchestrator | 2026-02-05 00:50:01 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:04.948334 | orchestrator | 2026-02-05 00:50:04 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:04.950630 | orchestrator | 2026-02-05 00:50:04 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:04.952651 | orchestrator | 2026-02-05 00:50:04 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:04.952718 | orchestrator | 2026-02-05 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:07.990630 | orchestrator | 2026-02-05 00:50:07 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:07.992572 | orchestrator | 2026-02-05 00:50:07 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:07.994135 | orchestrator | 2026-02-05 00:50:07 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:07.994197 | orchestrator | 2026-02-05 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:11.026751 | orchestrator | 2026-02-05 00:50:11 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:11.027170 | orchestrator | 2026-02-05 00:50:11 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:11.027889 | orchestrator | 2026-02-05 00:50:11 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:11.027922 | orchestrator | 2026-02-05 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:14.060898 | orchestrator | 2026-02-05 00:50:14 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:14.062447 | orchestrator | 2026-02-05 00:50:14 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:14.065755 | orchestrator | 2026-02-05 00:50:14 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:14.065828 | orchestrator | 2026-02-05 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:17.104759 | orchestrator | 2026-02-05 00:50:17 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:17.108056 | orchestrator | 2026-02-05 00:50:17 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:17.108285 | orchestrator | 2026-02-05 00:50:17 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:17.108313 | orchestrator | 2026-02-05 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:20.144032 | orchestrator | 2026-02-05 00:50:20 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:20.144435 | orchestrator | 2026-02-05 00:50:20 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state STARTED
2026-02-05 00:50:20.145060 | orchestrator | 2026-02-05 00:50:20 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:50:20.145078 | orchestrator | 2026-02-05 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:23.179609 | orchestrator | 2026-02-05 00:50:23 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED
2026-02-05 00:50:23.180385 | orchestrator |
2026-02-05 00:50:23.180416 | orchestrator | 2026-02-05 00:50:23 | INFO  | Task 84504d80-5451-42fe-b2aa-cb61e443e918 is in state SUCCESS
2026-02-05
00:50:23.181641 | orchestrator |
2026-02-05 00:50:23.181677 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:50:23.181686 | orchestrator |
2026-02-05 00:50:23.181693 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:50:23.181699 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:00.202) 0:00:00.202 *****
2026-02-05 00:50:23.181706 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:50:23.181713 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:50:23.181719 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:50:23.181725 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:23.181732 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:23.181738 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:23.181744 | orchestrator |
2026-02-05 00:50:23.181750 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:50:23.181756 | orchestrator | Thursday 05 February 2026 00:48:10 +0000 (0:00:01.578) 0:00:01.781 *****
2026-02-05 00:50:23.181762 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-05 00:50:23.181769 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-05 00:50:23.181775 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-05 00:50:23.181782 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-05 00:50:23.181789 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-05 00:50:23.181796 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-05 00:50:23.181802 | orchestrator |
2026-02-05 00:50:23.181809 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-05 00:50:23.181814 | orchestrator |
2026-02-05 00:50:23.181859 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-05 00:50:23.181877 | orchestrator | Thursday 05 February 2026 00:48:11 +0000 (0:00:00.968) 0:00:02.749 *****
2026-02-05 00:50:23.181885 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:50:23.181925 | orchestrator |
2026-02-05 00:50:23.181962 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-05 00:50:23.181977 | orchestrator | Thursday 05 February 2026 00:48:12 +0000 (0:00:01.255) 0:00:04.004 *****
2026-02-05 00:50:23.181985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.181993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.181999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182065 | orchestrator |
2026-02-05 00:50:23.182071 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-05 00:50:23.182078 | orchestrator | Thursday 05 February 2026 00:48:14 +0000 (0:00:01.503) 0:00:05.508 *****
2026-02-05 00:50:23.182085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182138 | orchestrator |
2026-02-05 00:50:23.182145 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-05 00:50:23.182152 | orchestrator | Thursday 05 February 2026 00:48:15 +0000 (0:00:01.522) 0:00:07.030 *****
2026-02-05 00:50:23.182159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182234 | orchestrator |
2026-02-05 00:50:23.182238 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-05 00:50:23.182242 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:01.089) 0:00:08.120 *****
2026-02-05 00:50:23.182246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182282 | orchestrator |
2026-02-05 00:50:23.182286 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-05 00:50:23.182290 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:01.569) 0:00:09.689 *****
2026-02-05 00:50:23.182294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.182319 | orchestrator |
2026-02-05 00:50:23.182323 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-05 00:50:23.182327 | orchestrator | Thursday 05 February 2026 00:48:19 +0000 (0:00:01.456) 0:00:11.146 *****
2026-02-05 00:50:23.182330 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:50:23.182334 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:50:23.182338 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:50:23.182342 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:23.182346 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:23.182349 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:23.182353 | orchestrator |
2026-02-05 00:50:23.182357 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-02-05 00:50:23.182360 | orchestrator | Thursday 05 February 2026 00:48:22 +0000 (0:00:02.569) 0:00:13.716 *****
2026-02-05 00:50:23.182369 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-02-05 00:50:23.182373 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-05 00:50:23.182377 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-05 00:50:23.182382 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-05 00:50:23.182386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-05 00:50:23.182390 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-05 00:50:23.182394 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 00:50:23.182398 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 00:50:23.182406 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 00:50:23.182410 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 00:50:23.182414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 00:50:23.182417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 00:50:23.182421 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 00:50:23.182427 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 00:50:23.182432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 00:50:23.182435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 00:50:23.182439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 00:50:23.182443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 00:50:23.182447 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 00:50:23.182451 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 00:50:23.182455 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 00:50:23.182458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 00:50:23.182462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 00:50:23.182466 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 00:50:23.182470 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 00:50:23.182473 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 00:50:23.182477 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 00:50:23.182481 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 00:50:23.182484 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 00:50:23.182488 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 00:50:23.182495 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 00:50:23.182499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 00:50:23.182503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 00:50:23.182506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 00:50:23.182510 | orchestrator
| ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 00:50:23.182514 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:50:23.182518 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:50:23.182522 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 00:50:23.182525 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 00:50:23.182529 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 00:50:23.182535 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-05 00:50:23.182539 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 00:50:23.182543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 00:50:23.182547 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-05 00:50:23.182551 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-05 00:50:23.182555 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-05 00:50:23.182559 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 
00:50:23.182565 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-05 00:50:23.182575 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-05 00:50:23.182585 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 00:50:23.182591 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 00:50:23.182598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 00:50:23.182605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 00:50:23.182612 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 00:50:23.182619 | orchestrator | 2026-02-05 00:50:23.182626 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:50:23.182633 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:19.110) 0:00:32.827 ***** 2026-02-05 00:50:23.182640 | orchestrator | 2026-02-05 00:50:23.182647 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:50:23.182654 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.056) 0:00:32.883 ***** 2026-02-05 00:50:23.182666 | orchestrator | 2026-02-05 00:50:23.182671 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:50:23.182675 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.064) 
0:00:32.948 ***** 2026-02-05 00:50:23.182679 | orchestrator | 2026-02-05 00:50:23.182683 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:50:23.182687 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.062) 0:00:33.011 ***** 2026-02-05 00:50:23.182690 | orchestrator | 2026-02-05 00:50:23.182694 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:50:23.182698 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.059) 0:00:33.070 ***** 2026-02-05 00:50:23.182702 | orchestrator | 2026-02-05 00:50:23.182705 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:50:23.182709 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.058) 0:00:33.128 ***** 2026-02-05 00:50:23.182713 | orchestrator | 2026-02-05 00:50:23.182717 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-05 00:50:23.182720 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.062) 0:00:33.191 ***** 2026-02-05 00:50:23.182724 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:50:23.182728 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.182732 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:50:23.182736 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:50:23.182739 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.182743 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.182747 | orchestrator | 2026-02-05 00:50:23.182750 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-05 00:50:23.182754 | orchestrator | Thursday 05 February 2026 00:48:43 +0000 (0:00:01.438) 0:00:34.630 ***** 2026-02-05 00:50:23.182758 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.182762 | orchestrator | changed: [testbed-node-3] 2026-02-05 
00:50:23.182765 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:50:23.182769 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:50:23.182773 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.182776 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:23.182780 | orchestrator | 2026-02-05 00:50:23.182784 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-05 00:50:23.182788 | orchestrator | 2026-02-05 00:50:23.182791 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 00:50:23.182795 | orchestrator | Thursday 05 February 2026 00:49:14 +0000 (0:00:30.846) 0:01:05.477 ***** 2026-02-05 00:50:23.182799 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:50:23.182803 | orchestrator | 2026-02-05 00:50:23.182807 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 00:50:23.182810 | orchestrator | Thursday 05 February 2026 00:49:15 +0000 (0:00:01.581) 0:01:07.058 ***** 2026-02-05 00:50:23.182814 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:50:23.182818 | orchestrator | 2026-02-05 00:50:23.182825 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-05 00:50:23.182829 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:00.659) 0:01:07.718 ***** 2026-02-05 00:50:23.182833 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.182837 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.182841 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.182845 | orchestrator | 2026-02-05 00:50:23.182848 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-05 00:50:23.182852 | orchestrator | 
Thursday 05 February 2026 00:49:17 +0000 (0:00:01.397) 0:01:09.115 ***** 2026-02-05 00:50:23.182856 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.182860 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.182864 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.182870 | orchestrator | 2026-02-05 00:50:23.182874 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-05 00:50:23.182878 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:00.285) 0:01:09.401 ***** 2026-02-05 00:50:23.182882 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.182885 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.182889 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.182893 | orchestrator | 2026-02-05 00:50:23.182897 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-05 00:50:23.182900 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:00.293) 0:01:09.694 ***** 2026-02-05 00:50:23.182904 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.182908 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.182912 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.182915 | orchestrator | 2026-02-05 00:50:23.182919 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-05 00:50:23.182923 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:00.295) 0:01:09.989 ***** 2026-02-05 00:50:23.182927 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.182931 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.182934 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.182938 | orchestrator | 2026-02-05 00:50:23.182944 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-05 00:50:23.182954 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.633) 0:01:10.623 ***** 
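The "Configure OVN in OVSDB" task above writes per-chassis `external_ids` (tunnel endpoint `ovn-encap-ip`, `ovn-encap-type` geneve, and an `ovn-remote` string listing all three southbound DB endpoints on port 6642). A minimal sketch of how such an `ovn-remote` string is assembled from the controller IPs; `build_ovn_remote` is a hypothetical helper for illustration, not part of kolla-ansible:

```python
def build_ovn_remote(db_hosts, port=6642, proto="tcp"):
    """Join one tcp endpoint per OVN SB DB host into the comma-separated
    connection string seen in the log above (hypothetical helper)."""
    return ",".join(f"{proto}:{host}:{port}" for host in db_hosts)

# The three control-plane nodes from this deployment:
sb_db_hosts = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
print(build_ovn_remote(sb_db_hosts))
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

Every chassis receives the same `ovn-remote` value, so each ovn-controller can fail over between the clustered southbound DB servers.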
2026-02-05 00:50:23.182961 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.182968 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.182973 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.182980 | orchestrator |
2026-02-05 00:50:23.182986 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-05 00:50:23.182992 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.253) 0:01:10.876 *****
2026-02-05 00:50:23.182999 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183003 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183007 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183010 | orchestrator |
2026-02-05 00:50:23.183014 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-05 00:50:23.183018 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.269) 0:01:11.146 *****
2026-02-05 00:50:23.183022 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183025 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183029 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183033 | orchestrator |
2026-02-05 00:50:23.183037 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-05 00:50:23.183043 | orchestrator | Thursday 05 February 2026 00:49:20 +0000 (0:00:00.354) 0:01:11.501 *****
2026-02-05 00:50:23.183050 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183056 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183062 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183068 | orchestrator |
2026-02-05 00:50:23.183075 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-05 00:50:23.183081 | orchestrator | Thursday 05 February 2026 00:49:20 +0000 (0:00:00.416) 0:01:11.918 *****
2026-02-05 00:50:23.183087 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183094 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183100 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183107 | orchestrator |
2026-02-05 00:50:23.183111 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-05 00:50:23.183115 | orchestrator | Thursday 05 February 2026 00:49:20 +0000 (0:00:00.254) 0:01:12.172 *****
2026-02-05 00:50:23.183119 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183122 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183126 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183130 | orchestrator |
2026-02-05 00:50:23.183134 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-05 00:50:23.183144 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.262) 0:01:12.435 *****
2026-02-05 00:50:23.183148 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183152 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183156 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183160 | orchestrator |
2026-02-05 00:50:23.183163 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-05 00:50:23.183167 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.284) 0:01:12.719 *****
2026-02-05 00:50:23.183171 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183175 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183178 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183182 | orchestrator |
2026-02-05 00:50:23.183221 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-05 00:50:23.183225 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.290) 0:01:13.010 *****
2026-02-05 00:50:23.183229 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183233 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183237 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183241 | orchestrator |
2026-02-05 00:50:23.183245 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-05 00:50:23.183248 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.397) 0:01:13.407 *****
2026-02-05 00:50:23.183252 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183256 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183260 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183264 | orchestrator |
2026-02-05 00:50:23.183271 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-05 00:50:23.183275 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.253) 0:01:13.661 *****
2026-02-05 00:50:23.183279 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183283 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183287 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183291 | orchestrator |
2026-02-05 00:50:23.183295 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-05 00:50:23.183298 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.273) 0:01:13.934 *****
2026-02-05 00:50:23.183302 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183306 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183310 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183314 | orchestrator |
2026-02-05 00:50:23.183318 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-05 00:50:23.183337 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.258) 0:01:14.193 *****
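The lookup_cluster tasks above first divide hosts by whether an OVN NB/SB DB volume already exists; because this is a fresh deployment with no pre-existing volumes, all port-liveness and leader/follower checks are skipped and the role proceeds to bootstrap a new cluster. A simplified sketch of that decision, under the assumption (not the actual kolla-ansible logic) that the path is chosen purely from volume availability:

```python
def choose_bootstrap_path(volume_exists):
    """Pick a bootstrap path from a host -> volume-exists mapping.
    Simplified illustration of the cluster-lookup outcome seen above."""
    with_volume = [h for h, present in volume_exists.items() if present]
    if not with_volume:
        return "bootstrap-initial"      # no volumes anywhere: brand-new cluster
    if len(with_volume) < len(volume_exists):
        return "bootstrap-new-member"   # some hosts must join an existing cluster
    return "no-bootstrap"               # cluster already present on every host

hosts = {"testbed-node-0": False, "testbed-node-1": False, "testbed-node-2": False}
print(choose_bootstrap_path(hosts))  # bootstrap-initial, matching the log
```

This matches the log's next step, where /ansible/roles/ovn-db/tasks/bootstrap-initial.yml is included for all three control nodes.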
2026-02-05 00:50:23.183342 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:50:23.183346 | orchestrator |
2026-02-05 00:50:23.183350 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-05 00:50:23.183354 | orchestrator | Thursday 05 February 2026 00:49:23 +0000 (0:00:00.665) 0:01:14.858 *****
2026-02-05 00:50:23.183357 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:23.183361 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:23.183367 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:23.183371 | orchestrator |
2026-02-05 00:50:23.183375 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-05 00:50:23.183379 | orchestrator | Thursday 05 February 2026 00:49:23 +0000 (0:00:00.426) 0:01:15.285 *****
2026-02-05 00:50:23.183382 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:23.183386 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:23.183390 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:23.183394 | orchestrator |
2026-02-05 00:50:23.183398 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-05 00:50:23.183401 | orchestrator | Thursday 05 February 2026 00:49:24 +0000 (0:00:00.416) 0:01:15.701 *****
2026-02-05 00:50:23.183409 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183413 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183416 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183420 | orchestrator |
2026-02-05 00:50:23.183424 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-05 00:50:23.183428 | orchestrator | Thursday 05 February 2026 00:49:24 +0000 (0:00:00.523) 0:01:16.225 *****
2026-02-05 00:50:23.183432 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183435 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183439 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183443 | orchestrator |
2026-02-05 00:50:23.183447 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-05 00:50:23.183450 | orchestrator | Thursday 05 February 2026 00:49:25 +0000 (0:00:00.424) 0:01:16.650 *****
2026-02-05 00:50:23.183454 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183458 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183464 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183471 | orchestrator |
2026-02-05 00:50:23.183476 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-05 00:50:23.183482 | orchestrator | Thursday 05 February 2026 00:49:25 +0000 (0:00:00.386) 0:01:17.036 *****
2026-02-05 00:50:23.183488 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183494 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183501 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183507 | orchestrator |
2026-02-05 00:50:23.183513 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-05 00:50:23.183520 | orchestrator | Thursday 05 February 2026 00:49:26 +0000 (0:00:00.395) 0:01:17.432 *****
2026-02-05 00:50:23.183527 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183532 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183536 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183540 | orchestrator |
2026-02-05 00:50:23.183544 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-05 00:50:23.183548 | orchestrator | Thursday 05 February 2026 00:49:26 +0000 (0:00:00.313) 0:01:17.745 *****
2026-02-05 00:50:23.183551 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:23.183555 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:23.183559 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:23.183563 | orchestrator |
2026-02-05 00:50:23.183566 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-05 00:50:23.183570 | orchestrator | Thursday 05 February 2026 00:49:26 +0000 (0:00:00.436) 0:01:18.181 *****
2026-02-05 00:50:23.183575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183625 | orchestrator |
2026-02-05 00:50:23.183628 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-05 00:50:23.183632 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:01.419) 0:01:19.601 *****
2026-02-05 00:50:23.183636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183681 | orchestrator |
2026-02-05 00:50:23.183685 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-05 00:50:23.183689 | orchestrator | Thursday 05 February 2026 00:49:32 +0000 (0:00:03.846) 0:01:23.447 *****
2026-02-05 00:50:23.183693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:50:23.183760 | orchestrator |
2026-02-05 00:50:23.183774 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:50:23.183779 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:02.031) 0:01:25.479 *****
2026-02-05 00:50:23.183787 | orchestrator |
2026-02-05 00:50:23.183791 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:50:23.183795 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:00.226) 0:01:25.706 *****
2026-02-05 00:50:23.183799 | orchestrator |
2026-02-05 00:50:23.183803 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:50:23.183807 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:00.066) 0:01:25.772 *****
2026-02-05 00:50:23.183811 | orchestrator |
2026-02-05 00:50:23.183814 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db
container] ************************* 2026-02-05 00:50:23.183819 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:00.095) 0:01:25.868 ***** 2026-02-05 00:50:23.183826 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.183833 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:23.183839 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.183845 | orchestrator | 2026-02-05 00:50:23.183851 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-05 00:50:23.183861 | orchestrator | Thursday 05 February 2026 00:49:37 +0000 (0:00:02.808) 0:01:28.677 ***** 2026-02-05 00:50:23.183868 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.183874 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:23.183881 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.183887 | orchestrator | 2026-02-05 00:50:23.183894 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-05 00:50:23.183901 | orchestrator | Thursday 05 February 2026 00:49:39 +0000 (0:00:02.607) 0:01:31.284 ***** 2026-02-05 00:50:23.183905 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.183909 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:23.183913 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.183917 | orchestrator | 2026-02-05 00:50:23.183921 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-05 00:50:23.183924 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:02.577) 0:01:33.862 ***** 2026-02-05 00:50:23.183928 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:23.183932 | orchestrator | 2026-02-05 00:50:23.183936 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-05 00:50:23.183940 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:00.121) 0:01:33.984 ***** 
2026-02-05 00:50:23.183944 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.183948 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.183951 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.183955 | orchestrator | 2026-02-05 00:50:23.183963 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-05 00:50:23.183968 | orchestrator | Thursday 05 February 2026 00:49:43 +0000 (0:00:00.900) 0:01:34.884 ***** 2026-02-05 00:50:23.183972 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:23.183976 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:23.183979 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.183983 | orchestrator | 2026-02-05 00:50:23.183987 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-05 00:50:23.183991 | orchestrator | Thursday 05 February 2026 00:49:44 +0000 (0:00:00.682) 0:01:35.566 ***** 2026-02-05 00:50:23.183995 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.183998 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184003 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184006 | orchestrator | 2026-02-05 00:50:23.184011 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-05 00:50:23.184015 | orchestrator | Thursday 05 February 2026 00:49:45 +0000 (0:00:00.770) 0:01:36.337 ***** 2026-02-05 00:50:23.184018 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:23.184022 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:23.184026 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.184030 | orchestrator | 2026-02-05 00:50:23.184034 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-05 00:50:23.184037 | orchestrator | Thursday 05 February 2026 00:49:45 +0000 (0:00:00.741) 0:01:37.079 ***** 2026-02-05 00:50:23.184041 | orchestrator 
| ok: [testbed-node-0] 2026-02-05 00:50:23.184045 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184050 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184054 | orchestrator | 2026-02-05 00:50:23.184060 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-05 00:50:23.184064 | orchestrator | Thursday 05 February 2026 00:49:46 +0000 (0:00:01.050) 0:01:38.129 ***** 2026-02-05 00:50:23.184068 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184072 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184076 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184080 | orchestrator | 2026-02-05 00:50:23.184084 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-05 00:50:23.184088 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:00.684) 0:01:38.814 ***** 2026-02-05 00:50:23.184092 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184099 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184102 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184106 | orchestrator | 2026-02-05 00:50:23.184110 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-05 00:50:23.184114 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:00.246) 0:01:39.060 ***** 2026-02-05 00:50:23.184118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184122 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184130 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184134 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184138 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 
00:50:23.184145 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184149 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184155 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184162 | orchestrator | 2026-02-05 00:50:23.184166 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-05 00:50:23.184170 | orchestrator | Thursday 05 February 2026 00:49:49 +0000 (0:00:01.356) 0:01:40.417 ***** 2026-02-05 00:50:23.184174 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184178 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184182 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184200 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184220 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184233 | orchestrator | 2026-02-05 00:50:23.184237 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-05 00:50:23.184243 | orchestrator | Thursday 05 February 2026 00:49:53 +0000 (0:00:03.999) 0:01:44.416 ***** 2026-02-05 00:50:23.184247 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184255 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184267 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184282 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:50:23.184291 | orchestrator | 2026-02-05 00:50:23.184295 | orchestrator | 
TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 00:50:23.184299 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:03.199) 0:01:47.616 ***** 2026-02-05 00:50:23.184302 | orchestrator | 2026-02-05 00:50:23.184306 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 00:50:23.184310 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:00.063) 0:01:47.680 ***** 2026-02-05 00:50:23.184314 | orchestrator | 2026-02-05 00:50:23.184320 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 00:50:23.184326 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:00.065) 0:01:47.746 ***** 2026-02-05 00:50:23.184332 | orchestrator | 2026-02-05 00:50:23.184338 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-05 00:50:23.184344 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:00.073) 0:01:47.819 ***** 2026-02-05 00:50:23.184350 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:23.184356 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.184360 | orchestrator | 2026-02-05 00:50:23.184364 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-05 00:50:23.184368 | orchestrator | Thursday 05 February 2026 00:50:02 +0000 (0:00:06.208) 0:01:54.027 ***** 2026-02-05 00:50:23.184371 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:23.184375 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.184379 | orchestrator | 2026-02-05 00:50:23.184383 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-05 00:50:23.184387 | orchestrator | Thursday 05 February 2026 00:50:08 +0000 (0:00:06.128) 0:02:00.155 ***** 2026-02-05 00:50:23.184390 | orchestrator | changed: [testbed-node-1] 2026-02-05 
00:50:23.184394 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:23.184398 | orchestrator | 2026-02-05 00:50:23.184402 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-05 00:50:23.184406 | orchestrator | Thursday 05 February 2026 00:50:15 +0000 (0:00:06.517) 0:02:06.673 ***** 2026-02-05 00:50:23.184410 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:23.184413 | orchestrator | 2026-02-05 00:50:23.184417 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-05 00:50:23.184421 | orchestrator | Thursday 05 February 2026 00:50:15 +0000 (0:00:00.261) 0:02:06.935 ***** 2026-02-05 00:50:23.184425 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184429 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184432 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184436 | orchestrator | 2026-02-05 00:50:23.184440 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-05 00:50:23.184444 | orchestrator | Thursday 05 February 2026 00:50:16 +0000 (0:00:00.756) 0:02:07.692 ***** 2026-02-05 00:50:23.184447 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:23.184451 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:23.184455 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.184459 | orchestrator | 2026-02-05 00:50:23.184462 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-05 00:50:23.184466 | orchestrator | Thursday 05 February 2026 00:50:16 +0000 (0:00:00.590) 0:02:08.283 ***** 2026-02-05 00:50:23.184470 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184474 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184478 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184481 | orchestrator | 2026-02-05 00:50:23.184485 | orchestrator | TASK [ovn-db : Configure OVN SB 
connection settings] *************************** 2026-02-05 00:50:23.184489 | orchestrator | Thursday 05 February 2026 00:50:17 +0000 (0:00:00.747) 0:02:09.030 ***** 2026-02-05 00:50:23.184496 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:23.184500 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:23.184504 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:23.184508 | orchestrator | 2026-02-05 00:50:23.184511 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-05 00:50:23.184515 | orchestrator | Thursday 05 February 2026 00:50:18 +0000 (0:00:00.722) 0:02:09.752 ***** 2026-02-05 00:50:23.184519 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184523 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184527 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184530 | orchestrator | 2026-02-05 00:50:23.184534 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-05 00:50:23.184538 | orchestrator | Thursday 05 February 2026 00:50:19 +0000 (0:00:00.781) 0:02:10.534 ***** 2026-02-05 00:50:23.184542 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:23.184545 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:23.184549 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:23.184553 | orchestrator | 2026-02-05 00:50:23.184557 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:50:23.184561 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-05 00:50:23.184566 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-05 00:50:23.184572 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-05 00:50:23.184576 | orchestrator | testbed-node-3 : ok=12  changed=8  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:50:23.184580 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:50:23.184584 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:50:23.184588 | orchestrator | 2026-02-05 00:50:23.184592 | orchestrator | 2026-02-05 00:50:23.184596 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:50:23.184600 | orchestrator | Thursday 05 February 2026 00:50:20 +0000 (0:00:00.916) 0:02:11.451 ***** 2026-02-05 00:50:23.184604 | orchestrator | =============================================================================== 2026-02-05 00:50:23.184615 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.85s 2026-02-05 00:50:23.184619 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.11s 2026-02-05 00:50:23.184625 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.10s 2026-02-05 00:50:23.184629 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.02s 2026-02-05 00:50:23.184633 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.74s 2026-02-05 00:50:23.184638 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s 2026-02-05 00:50:23.184641 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.85s 2026-02-05 00:50:23.184645 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.20s 2026-02-05 00:50:23.184649 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s 2026-02-05 00:50:23.184653 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 2.03s 2026-02-05 00:50:23.184657 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.58s 2026-02-05 00:50:23.184661 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.58s 2026-02-05 00:50:23.184666 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.57s 2026-02-05 00:50:23.184679 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.52s 2026-02-05 00:50:23.184686 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.50s 2026-02-05 00:50:23.184693 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s 2026-02-05 00:50:23.184699 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.44s 2026-02-05 00:50:23.184706 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-02-05 00:50:23.184712 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.40s 2026-02-05 00:50:23.184719 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s 2026-02-05 00:50:23.184726 | orchestrator | 2026-02-05 00:50:23 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:50:23.184733 | orchestrator | 2026-02-05 00:50:23 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:50:26.219893 | orchestrator | 2026-02-05 00:50:26 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:50:26.221996 | orchestrator | 2026-02-05 00:50:26 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:50:26.222261 | orchestrator | 2026-02-05 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:50:29.258269 | orchestrator | 2026-02-05 00:50:29 | INFO  | 
Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:50:29.258984 | orchestrator | 2026-02-05 00:50:29 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:50:29.259031 | orchestrator | 2026-02-05 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:14.847499 | orchestrator | 2026-02-05 00:51:14 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:14.847841 | orchestrator | 2026-02-05 00:51:14 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 
00:51:14.847875 | orchestrator | 2026-02-05 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:17.882638 | orchestrator | 2026-02-05 00:51:17 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:17.883518 | orchestrator | 2026-02-05 00:51:17 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:17.883562 | orchestrator | 2026-02-05 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:20.917546 | orchestrator | 2026-02-05 00:51:20 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:20.918384 | orchestrator | 2026-02-05 00:51:20 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:20.918416 | orchestrator | 2026-02-05 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:23.955601 | orchestrator | 2026-02-05 00:51:23 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:23.956430 | orchestrator | 2026-02-05 00:51:23 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:23.956508 | orchestrator | 2026-02-05 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:27.002164 | orchestrator | 2026-02-05 00:51:27 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:27.004181 | orchestrator | 2026-02-05 00:51:27 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:27.004255 | orchestrator | 2026-02-05 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:30.061132 | orchestrator | 2026-02-05 00:51:30 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:30.061215 | orchestrator | 2026-02-05 00:51:30 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:30.061222 | orchestrator | 2026-02-05 00:51:30 | INFO  | Wait 1 second(s) 
until the next check 2026-02-05 00:51:33.100843 | orchestrator | 2026-02-05 00:51:33 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:33.102248 | orchestrator | 2026-02-05 00:51:33 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:33.102295 | orchestrator | 2026-02-05 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:36.138517 | orchestrator | 2026-02-05 00:51:36 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:36.140967 | orchestrator | 2026-02-05 00:51:36 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:36.141401 | orchestrator | 2026-02-05 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:39.180668 | orchestrator | 2026-02-05 00:51:39 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:39.182197 | orchestrator | 2026-02-05 00:51:39 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:39.182289 | orchestrator | 2026-02-05 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:42.230174 | orchestrator | 2026-02-05 00:51:42 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:42.230412 | orchestrator | 2026-02-05 00:51:42 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:42.230473 | orchestrator | 2026-02-05 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:45.276431 | orchestrator | 2026-02-05 00:51:45 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:45.278474 | orchestrator | 2026-02-05 00:51:45 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:45.278564 | orchestrator | 2026-02-05 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:48.308951 | orchestrator | 2026-02-05 
00:51:48 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:48.309200 | orchestrator | 2026-02-05 00:51:48 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:48.309221 | orchestrator | 2026-02-05 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:51.354090 | orchestrator | 2026-02-05 00:51:51 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:51.354189 | orchestrator | 2026-02-05 00:51:51 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:51.354199 | orchestrator | 2026-02-05 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:54.401483 | orchestrator | 2026-02-05 00:51:54 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:54.403842 | orchestrator | 2026-02-05 00:51:54 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:54.404402 | orchestrator | 2026-02-05 00:51:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:57.442855 | orchestrator | 2026-02-05 00:51:57 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:51:57.443655 | orchestrator | 2026-02-05 00:51:57 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:51:57.443893 | orchestrator | 2026-02-05 00:51:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:00.478909 | orchestrator | 2026-02-05 00:52:00 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:00.480823 | orchestrator | 2026-02-05 00:52:00 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:00.481108 | orchestrator | 2026-02-05 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:03.528154 | orchestrator | 2026-02-05 00:52:03 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state 
STARTED 2026-02-05 00:52:03.529714 | orchestrator | 2026-02-05 00:52:03 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:03.529764 | orchestrator | 2026-02-05 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:06.568507 | orchestrator | 2026-02-05 00:52:06 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:06.570525 | orchestrator | 2026-02-05 00:52:06 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:06.570640 | orchestrator | 2026-02-05 00:52:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:09.614127 | orchestrator | 2026-02-05 00:52:09 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:09.614467 | orchestrator | 2026-02-05 00:52:09 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:09.614493 | orchestrator | 2026-02-05 00:52:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:12.659047 | orchestrator | 2026-02-05 00:52:12 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:12.660771 | orchestrator | 2026-02-05 00:52:12 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:12.660822 | orchestrator | 2026-02-05 00:52:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:15.700538 | orchestrator | 2026-02-05 00:52:15 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:15.702069 | orchestrator | 2026-02-05 00:52:15 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:15.702106 | orchestrator | 2026-02-05 00:52:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:18.746225 | orchestrator | 2026-02-05 00:52:18 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:18.747585 | orchestrator | 2026-02-05 00:52:18 | INFO  
| Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:18.747619 | orchestrator | 2026-02-05 00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:21.782752 | orchestrator | 2026-02-05 00:52:21 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:21.783630 | orchestrator | 2026-02-05 00:52:21 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:21.783659 | orchestrator | 2026-02-05 00:52:21 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:24.815403 | orchestrator | 2026-02-05 00:52:24 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:24.815533 | orchestrator | 2026-02-05 00:52:24 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:24.815562 | orchestrator | 2026-02-05 00:52:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:27.856672 | orchestrator | 2026-02-05 00:52:27 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:27.857265 | orchestrator | 2026-02-05 00:52:27 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:27.857522 | orchestrator | 2026-02-05 00:52:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:30.902728 | orchestrator | 2026-02-05 00:52:30 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:30.905381 | orchestrator | 2026-02-05 00:52:30 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:30.906084 | orchestrator | 2026-02-05 00:52:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:33.951069 | orchestrator | 2026-02-05 00:52:33 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:33.953463 | orchestrator | 2026-02-05 00:52:33 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 
00:52:33.953546 | orchestrator | 2026-02-05 00:52:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:36.990100 | orchestrator | 2026-02-05 00:52:36 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:36.992724 | orchestrator | 2026-02-05 00:52:36 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:36.992856 | orchestrator | 2026-02-05 00:52:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:40.031262 | orchestrator | 2026-02-05 00:52:40 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:40.031396 | orchestrator | 2026-02-05 00:52:40 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:40.031980 | orchestrator | 2026-02-05 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:43.075174 | orchestrator | 2026-02-05 00:52:43 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:43.077015 | orchestrator | 2026-02-05 00:52:43 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:43.077148 | orchestrator | 2026-02-05 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:46.123595 | orchestrator | 2026-02-05 00:52:46 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:46.126582 | orchestrator | 2026-02-05 00:52:46 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:46.127416 | orchestrator | 2026-02-05 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:49.171744 | orchestrator | 2026-02-05 00:52:49 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:49.173423 | orchestrator | 2026-02-05 00:52:49 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:49.173470 | orchestrator | 2026-02-05 00:52:49 | INFO  | Wait 1 second(s) 
until the next check 2026-02-05 00:52:52.217716 | orchestrator | 2026-02-05 00:52:52 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:52.220014 | orchestrator | 2026-02-05 00:52:52 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:52.220148 | orchestrator | 2026-02-05 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:55.260525 | orchestrator | 2026-02-05 00:52:55 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:55.262907 | orchestrator | 2026-02-05 00:52:55 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:55.267458 | orchestrator | 2026-02-05 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:58.307039 | orchestrator | 2026-02-05 00:52:58 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state STARTED 2026-02-05 00:52:58.307915 | orchestrator | 2026-02-05 00:52:58 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:52:58.307949 | orchestrator | 2026-02-05 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:01.336625 | orchestrator | 2026-02-05 00:53:01 | INFO  | Task cea68bdf-1c44-4fc2-8ec2-ce80e852033f is in state SUCCESS 2026-02-05 00:53:01.338642 | orchestrator | 2026-02-05 00:53:01.338739 | orchestrator | 2026-02-05 00:53:01.338766 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:53:01.338789 | orchestrator | 2026-02-05 00:53:01.338796 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:53:01.338804 | orchestrator | Thursday 05 February 2026 00:46:59 +0000 (0:00:00.258) 0:00:00.258 ***** 2026-02-05 00:53:01.338811 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.338824 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.338831 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 00:53:01.338897 | orchestrator | 2026-02-05 00:53:01.338908 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:53:01.338915 | orchestrator | Thursday 05 February 2026 00:47:00 +0000 (0:00:00.430) 0:00:00.688 ***** 2026-02-05 00:53:01.338922 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-05 00:53:01.338929 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-05 00:53:01.338935 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-05 00:53:01.338942 | orchestrator | 2026-02-05 00:53:01.338949 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-05 00:53:01.338955 | orchestrator | 2026-02-05 00:53:01.338962 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-05 00:53:01.338968 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:00.625) 0:00:01.314 ***** 2026-02-05 00:53:01.338975 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.338982 | orchestrator | 2026-02-05 00:53:01.338989 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-05 00:53:01.339010 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:00.481) 0:00:01.795 ***** 2026-02-05 00:53:01.339017 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.339023 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.339030 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.339037 | orchestrator | 2026-02-05 00:53:01.339043 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-05 00:53:01.339050 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:00.716) 0:00:02.512 ***** 2026-02-05 
00:53:01.339057 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.339063 | orchestrator | 2026-02-05 00:53:01.339074 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-05 00:53:01.339081 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:00.571) 0:00:03.084 ***** 2026-02-05 00:53:01.339088 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.339095 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.339101 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.339108 | orchestrator | 2026-02-05 00:53:01.339115 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-05 00:53:01.339121 | orchestrator | Thursday 05 February 2026 00:47:04 +0000 (0:00:01.527) 0:00:04.612 ***** 2026-02-05 00:53:01.339128 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-05 00:53:01.339135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-05 00:53:01.339141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-05 00:53:01.339148 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-05 00:53:01.339155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-05 00:53:01.339163 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-05 00:53:01.339171 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-05 00:53:01.339179 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-05 00:53:01.339187 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-05 00:53:01.339195 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-05 00:53:01.339203 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-05 00:53:01.339210 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-05 00:53:01.339218 | orchestrator | 2026-02-05 00:53:01.339226 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 00:53:01.339234 | orchestrator | Thursday 05 February 2026 00:47:07 +0000 (0:00:03.290) 0:00:07.902 ***** 2026-02-05 00:53:01.339242 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-05 00:53:01.339250 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-05 00:53:01.339258 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-05 00:53:01.339267 | orchestrator | 2026-02-05 00:53:01.339279 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-05 00:53:01.339291 | orchestrator | Thursday 05 February 2026 00:47:08 +0000 (0:00:00.739) 0:00:08.642 ***** 2026-02-05 00:53:01.339302 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-05 00:53:01.339313 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-05 00:53:01.339323 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-05 00:53:01.339333 | orchestrator | 2026-02-05 00:53:01.339343 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 00:53:01.339355 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:01.481) 0:00:10.123 ***** 2026-02-05 00:53:01.339374 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-05 00:53:01.339384 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.339410 
| orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-05 00:53:01.339422 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.339434 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-05 00:53:01.339445 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.339457 | orchestrator | 2026-02-05 00:53:01.339468 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-05 00:53:01.339480 | orchestrator | Thursday 05 February 2026 00:47:10 +0000 (0:00:00.747) 0:00:10.871 ***** 2026-02-05 00:53:01.339493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.339515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.339528 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.339540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.339552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.339563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.339590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.339603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.339615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.339626 | orchestrator | 2026-02-05 00:53:01.339636 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-05 00:53:01.339647 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:02.249) 0:00:13.120 ***** 2026-02-05 00:53:01.339657 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.339668 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.339679 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.339688 | orchestrator | 2026-02-05 00:53:01.339698 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-05 00:53:01.339709 | orchestrator | Thursday 05 February 2026 00:47:14 +0000 (0:00:01.526) 0:00:14.646 ***** 2026-02-05 00:53:01.339721 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-05 00:53:01.339733 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-05 00:53:01.339744 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-05 00:53:01.339756 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-05 00:53:01.339767 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-05 00:53:01.339778 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-05 00:53:01.339789 | orchestrator | 2026-02-05 00:53:01.339801 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-05 00:53:01.339810 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:01.693) 0:00:16.340 ***** 2026-02-05 00:53:01.339816 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 00:53:01.339823 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.339830 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.339836 | orchestrator | 2026-02-05 00:53:01.339843 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-05 00:53:01.339875 | orchestrator | Thursday 05 February 2026 00:47:17 +0000 (0:00:01.090) 0:00:17.431 ***** 2026-02-05 00:53:01.339882 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.339889 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.339896 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.339902 | orchestrator | 2026-02-05 00:53:01.339909 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-05 00:53:01.339916 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:03.115) 0:00:20.547 ***** 2026-02-05 00:53:01.339923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.339938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.339946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.339972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:53:01.339983 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.339991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.339998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.340010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.340028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:53:01.340035 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.340041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.340051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:53:01.340072 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.340079 | orchestrator | 2026-02-05 00:53:01.340086 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-05 00:53:01.340092 | orchestrator | Thursday 05 February 2026 00:47:21 +0000 (0:00:00.872) 0:00:21.419 ***** 2026-02-05 00:53:01.340099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:53:01.340153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:53:01.340178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46', '__omit_place_holder__9001700372603bf40790ebc529b4212238048a46'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:53:01.340209 | orchestrator | 2026-02-05 00:53:01.340215 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-05 00:53:01.340222 | orchestrator | Thursday 05 February 2026 00:47:24 +0000 (0:00:03.181) 0:00:24.601 ***** 2026-02-05 00:53:01.340229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340257 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-02-05 00:53:01.340284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.340291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.340298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.340305 | orchestrator | 2026-02-05 00:53:01.340312 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-05 00:53:01.340319 | orchestrator | Thursday 05 February 2026 00:47:27 +0000 (0:00:03.325) 0:00:27.926 ***** 2026-02-05 00:53:01.340326 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-05 00:53:01.340336 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-05 00:53:01.340343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-05 00:53:01.340350 | orchestrator | 2026-02-05 00:53:01.340357 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-05 00:53:01.340364 | orchestrator | Thursday 05 February 2026 00:47:31 +0000 (0:00:03.378) 0:00:31.305 ***** 2026-02-05 00:53:01.340370 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-05 00:53:01.340377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-05 00:53:01.340384 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-05 00:53:01.340390 | orchestrator | 2026-02-05 00:53:01.340397 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-05 00:53:01.340404 | orchestrator | Thursday 05 February 2026 00:47:35 +0000 (0:00:04.755) 0:00:36.060 ***** 2026-02-05 00:53:01.340410 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.340421 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.340428 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.340434 | orchestrator | 2026-02-05 00:53:01.340441 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-05 00:53:01.340447 | orchestrator | Thursday 05 February 2026 00:47:36 +0000 (0:00:00.587) 0:00:36.648 ***** 2026-02-05 00:53:01.340454 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-05 00:53:01.340467 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-05 00:53:01.340478 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-05 00:53:01.340490 | orchestrator | 2026-02-05 00:53:01.340501 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-05 00:53:01.340516 | orchestrator | Thursday 05 February 2026 00:47:38 +0000 (0:00:02.605) 0:00:39.254 ***** 2026-02-05 00:53:01.340527 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-05 00:53:01.340538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-05 00:53:01.340549 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-05 00:53:01.340561 | orchestrator | 2026-02-05 00:53:01.340573 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-05 00:53:01.340584 | orchestrator | Thursday 05 February 2026 00:47:42 +0000 (0:00:03.469) 0:00:42.723 ***** 2026-02-05 00:53:01.340595 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-05 00:53:01.340605 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-05 00:53:01.340612 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-05 00:53:01.340619 | orchestrator | 2026-02-05 00:53:01.340625 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-05 00:53:01.340632 | orchestrator | Thursday 05 February 2026 00:47:44 +0000 (0:00:01.608) 0:00:44.331 ***** 
2026-02-05 00:53:01.340639 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-05 00:53:01.340646 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-05 00:53:01.340652 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-05 00:53:01.340659 | orchestrator | 2026-02-05 00:53:01.340665 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-05 00:53:01.340673 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:02.135) 0:00:46.467 ***** 2026-02-05 00:53:01.340685 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.340697 | orchestrator | 2026-02-05 00:53:01.340708 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-05 00:53:01.340718 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:00.516) 0:00:46.984 ***** 2026-02-05 00:53:01.340730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.340808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.340815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.340831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.340838 | orchestrator | 2026-02-05 00:53:01.340911 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-05 00:53:01.340921 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:03.644) 0:00:50.629 ***** 2026-02-05 00:53:01.340928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.340939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.340946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.340953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.340960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.340984 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.340991 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.340998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341022 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.341029 | orchestrator | 2026-02-05 00:53:01.341035 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-05 00:53:01.341042 | orchestrator | Thursday 05 February 2026 00:47:51 +0000 (0:00:01.031) 0:00:51.660 ***** 2026-02-05 00:53:01.341049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341078 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.341090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341132 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.341144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341190 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.341200 | orchestrator | 2026-02-05 00:53:01.341207 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 00:53:01.341214 | orchestrator | Thursday 05 February 2026 00:47:52 +0000 (0:00:01.016) 0:00:52.676 ***** 2026-02-05 00:53:01.341226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341254 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.341261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341286 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.341296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341317 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.341324 | orchestrator | 2026-02-05 00:53:01.341331 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-05 00:53:01.341338 | orchestrator | Thursday 05 February 2026 
00:47:53 +0000 (0:00:01.148) 0:00:53.825 ***** 2026-02-05 00:53:01.341347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.341383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341390 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.341397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 
00:53:01.341404 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.341414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 
00:53:01.341438 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.341445 | orchestrator | 2026-02-05 00:53:01.341452 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-05 00:53:01.341459 | orchestrator | Thursday 05 February 2026 00:47:54 +0000 (0:00:00.832) 0:00:54.658 ***** 2026-02-05 00:53:01.341466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.341985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.341998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342068 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.342085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342096 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.342109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342169 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.342176 | orchestrator | 2026-02-05 00:53:01.342183 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-05 00:53:01.342190 | orchestrator | Thursday 05 February 2026 00:47:56 +0000 (0:00:01.785) 0:00:56.444 ***** 2026-02-05 00:53:01.342197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-02-05 00:53:01.342220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342246 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342253 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.342260 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.342285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}})  2026-02-05 00:53:01.342319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342330 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.342341 | orchestrator | 2026-02-05 00:53:01.342353 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-05 00:53:01.342364 | orchestrator | Thursday 05 February 2026 00:47:58 +0000 (0:00:02.092) 0:00:58.536 ***** 2026-02-05 00:53:01.342376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342419 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.342431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342504 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.342539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342578 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.342590 | orchestrator | 2026-02-05 00:53:01.342602 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-05 00:53:01.342621 | orchestrator | Thursday 05 February 2026 00:47:58 +0000 (0:00:00.593) 0:00:59.130 ***** 2026-02-05 00:53:01.342633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342653 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342668 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.342675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342682 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342695 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.342707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:53:01.342714 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:53:01.342729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:53:01.342736 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.342742 | orchestrator | 2026-02-05 00:53:01.342765 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-05 00:53:01.342775 | orchestrator | Thursday 05 February 2026 00:47:59 +0000 (0:00:00.692) 0:00:59.823 ***** 2026-02-05 00:53:01.342782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 00:53:01.342800 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 00:53:01.342807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 
00:53:01.342814 | orchestrator | 2026-02-05 00:53:01.342886 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-05 00:53:01.342894 | orchestrator | Thursday 05 February 2026 00:48:01 +0000 (0:00:01.778) 0:01:01.601 ***** 2026-02-05 00:53:01.342901 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 00:53:01.342928 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 00:53:01.342944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 00:53:01.342951 | orchestrator | 2026-02-05 00:53:01.342957 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-05 00:53:01.342964 | orchestrator | Thursday 05 February 2026 00:48:02 +0000 (0:00:01.516) 0:01:03.117 ***** 2026-02-05 00:53:01.342971 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 00:53:01.343009 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 00:53:01.343016 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 00:53:01.343023 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 00:53:01.343029 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.343036 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 00:53:01.343043 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.343049 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 
00:53:01.343056 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.343063 | orchestrator | 2026-02-05 00:53:01.343069 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-05 00:53:01.343076 | orchestrator | Thursday 05 February 2026 00:48:03 +0000 (0:00:00.780) 0:01:03.898 ***** 2026-02-05 00:53:01.343093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.343101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.343108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:53:01.343118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.343125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.343132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:53:01.343143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.343154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.343161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:53:01.343168 | orchestrator | 2026-02-05 00:53:01.343175 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-05 00:53:01.343181 | orchestrator | Thursday 05 February 2026 00:48:06 +0000 (0:00:02.621) 0:01:06.520 ***** 2026-02-05 00:53:01.343188 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.343195 | orchestrator | 2026-02-05 00:53:01.343201 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-05 00:53:01.343208 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.773) 0:01:07.293 ***** 2026-02-05 00:53:01.343219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 00:53:01.343227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:53:01.343234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 00:53:01.343267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:53:01.343283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 00:53:01.343325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  
2026-02-05 00:53:01.343343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343367 | orchestrator | 2026-02-05 00:53:01.343379 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-05 00:53:01.343388 | orchestrator | Thursday 05 February 2026 00:48:11 +0000 (0:00:04.729) 0:01:12.022 ***** 2026-02-05 00:53:01.343411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 00:53:01.343419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:53:01.343442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343476 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.343498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 00:53:01.343506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:53:01.343525 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343587 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.343595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 00:53:01.343602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:53:01.343629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343661 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.343668 | orchestrator | 2026-02-05 00:53:01.343675 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-05 00:53:01.343681 | orchestrator | Thursday 05 February 2026 00:48:12 +0000 (0:00:00.698) 0:01:12.721 ***** 2026-02-05 00:53:01.343688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:53:01.343699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:53:01.343707 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.343714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:53:01.343721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:53:01.343727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:53:01.343741 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.343748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:53:01.343755 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 00:53:01.343762 | orchestrator | 2026-02-05 00:53:01.343768 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-05 00:53:01.343775 | orchestrator | Thursday 05 February 2026 00:48:13 +0000 (0:00:01.414) 0:01:14.136 ***** 2026-02-05 00:53:01.343782 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.343788 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.343795 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.343802 | orchestrator | 2026-02-05 00:53:01.343808 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-05 00:53:01.343815 | orchestrator | Thursday 05 February 2026 00:48:15 +0000 (0:00:01.235) 0:01:15.371 ***** 2026-02-05 00:53:01.343821 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.343828 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.343835 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.343842 | orchestrator | 2026-02-05 00:53:01.343888 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-05 00:53:01.343895 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:01.929) 0:01:17.301 ***** 2026-02-05 00:53:01.343902 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.343909 | orchestrator | 2026-02-05 00:53:01.343915 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-05 00:53:01.343922 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.797) 0:01:18.099 ***** 2026-02-05 00:53:01.343935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.343943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.343974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.343993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.344000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344021 | orchestrator | 2026-02-05 00:53:01.344028 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-05 00:53:01.344035 | orchestrator | Thursday 05 February 2026 00:48:21 +0000 (0:00:03.479) 0:01:21.578 ***** 2026-02-05 00:53:01.344042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.344049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344068 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.344090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344104 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.344122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344135 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344145 | orchestrator | 2026-02-05 00:53:01.344152 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-05 00:53:01.344158 | orchestrator | Thursday 05 February 2026 00:48:21 +0000 (0:00:00.486) 0:01:22.065 ***** 2026-02-05 00:53:01.344165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:53:01.344171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:53:01.344178 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:53:01.344194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:53:01.344200 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:53:01.344213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:53:01.344219 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344225 | orchestrator | 2026-02-05 00:53:01.344232 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-05 00:53:01.344238 | orchestrator | Thursday 05 February 2026 00:48:22 +0000 (0:00:00.858) 0:01:22.924 ***** 2026-02-05 00:53:01.344264 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.344271 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.344277 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.344283 | orchestrator | 2026-02-05 00:53:01.344289 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-05 00:53:01.344295 | orchestrator | Thursday 05 February 2026 00:48:24 +0000 (0:00:01.534) 0:01:24.459 ***** 2026-02-05 00:53:01.344302 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.344308 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.344314 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.344320 | orchestrator | 2026-02-05 00:53:01.344327 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-05 00:53:01.344333 | orchestrator | Thursday 05 February 2026 00:48:26 +0000 (0:00:01.933) 0:01:26.392 ***** 2026-02-05 00:53:01.344339 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344353 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344360 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344366 | orchestrator | 2026-02-05 00:53:01.344372 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-05 00:53:01.344462 | orchestrator | Thursday 05 February 2026 
00:48:26 +0000 (0:00:00.224) 0:01:26.617 ***** 2026-02-05 00:53:01.344480 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.344487 | orchestrator | 2026-02-05 00:53:01.344493 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-05 00:53:01.344505 | orchestrator | Thursday 05 February 2026 00:48:26 +0000 (0:00:00.501) 0:01:27.118 ***** 2026-02-05 00:53:01.344525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 00:53:01.344543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 00:53:01.344555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 00:53:01.344573 | orchestrator | 2026-02-05 00:53:01.344580 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-05 00:53:01.344586 | orchestrator | Thursday 05 February 2026 00:48:29 +0000 (0:00:02.588) 0:01:29.706 ***** 2026-02-05 00:53:01.344593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 00:53:01.344599 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 00:53:01.344616 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 00:53:01.344634 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344640 | orchestrator | 2026-02-05 00:53:01.344646 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-05 00:53:01.344652 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:01.978) 0:01:31.684 ***** 2026-02-05 00:53:01.344664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:53:01.344675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:53:01.344683 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:53:01.344696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:53:01.344702 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:53:01.344715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:53:01.344766 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344773 | orchestrator | 2026-02-05 00:53:01.344779 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-05 00:53:01.344786 | orchestrator | Thursday 05 February 2026 00:48:33 +0000 (0:00:02.121) 0:01:33.806 ***** 2026-02-05 00:53:01.344792 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344807 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344813 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344820 | orchestrator | 2026-02-05 00:53:01.344826 | orchestrator | 
TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-05 00:53:01.344838 | orchestrator | Thursday 05 February 2026 00:48:34 +0000 (0:00:00.635) 0:01:34.441 ***** 2026-02-05 00:53:01.344860 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.344867 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.344873 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.344880 | orchestrator | 2026-02-05 00:53:01.344886 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-05 00:53:01.344896 | orchestrator | Thursday 05 February 2026 00:48:35 +0000 (0:00:01.092) 0:01:35.534 ***** 2026-02-05 00:53:01.344903 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.344924 | orchestrator | 2026-02-05 00:53:01.344930 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-05 00:53:01.344954 | orchestrator | Thursday 05 February 2026 00:48:35 +0000 (0:00:00.659) 0:01:36.194 ***** 2026-02-05 00:53:01.344961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.344972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.344999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.345048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.345081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345105 | orchestrator | 2026-02-05 00:53:01.345111 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-05 00:53:01.345118 | orchestrator | Thursday 05 February 2026 00:48:39 +0000 (0:00:03.215) 0:01:39.410 ***** 2026-02-05 00:53:01.345127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.345137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345160 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.345167 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.345174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345200 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.345206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.345218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345245 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.345251 | orchestrator | 2026-02-05 00:53:01.345258 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-05 00:53:01.345264 | orchestrator | Thursday 05 February 2026 00:48:40 +0000 (0:00:00.943) 0:01:40.353 ***** 2026-02-05 00:53:01.345271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 00:53:01.345277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 00:53:01.345284 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.345290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 00:53:01.345297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 00:53:01.345303 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.345310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 00:53:01.345316 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 00:53:01.345322 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.345329 | orchestrator | 2026-02-05 00:53:01.345335 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-05 00:53:01.345341 | orchestrator | Thursday 05 February 2026 00:48:40 +0000 (0:00:00.857) 0:01:41.211 ***** 2026-02-05 00:53:01.345347 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.345353 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.345442 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.345450 | orchestrator | 2026-02-05 00:53:01.345456 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-05 00:53:01.345462 | orchestrator | Thursday 05 February 2026 00:48:42 +0000 (0:00:01.219) 0:01:42.430 ***** 2026-02-05 00:53:01.345469 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.345475 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.345481 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.345487 | orchestrator | 2026-02-05 00:53:01.345497 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-05 00:53:01.345504 | orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:01.982) 0:01:44.413 ***** 2026-02-05 00:53:01.345510 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.345516 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.345522 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.345529 | orchestrator | 2026-02-05 00:53:01.345535 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-05 00:53:01.345541 | 
orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:00.327) 0:01:44.741 ***** 2026-02-05 00:53:01.345547 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.345553 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.345560 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.345570 | orchestrator | 2026-02-05 00:53:01.345576 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-05 00:53:01.345582 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.660) 0:01:45.401 ***** 2026-02-05 00:53:01.345588 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.345595 | orchestrator | 2026-02-05 00:53:01.345601 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-05 00:53:01.345607 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.846) 0:01:46.247 ***** 2026-02-05 00:53:01.345617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 00:53:01.345624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:53:01.345632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 00:53:01.345676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:53:01.345689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 00:53:01.345748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:53:01.345755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345801 | orchestrator | 2026-02-05 00:53:01.345808 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-05 00:53:01.345814 | orchestrator | Thursday 05 February 2026 00:48:50 +0000 (0:00:04.789) 0:01:51.037 ***** 2026-02-05 00:53:01.345821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 00:53:01.345830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:53:01.345841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 00:53:01.345878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:53:01.345892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345920 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.345929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.345959 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.345969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 00:53:01.345976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:53:01.345987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 
00:53:01.345994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.346001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.346007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.346055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.346062 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346069 | orchestrator | 2026-02-05 00:53:01.346075 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-05 00:53:01.346081 | orchestrator | Thursday 05 February 2026 00:48:51 +0000 (0:00:01.008) 0:01:52.045 ***** 2026-02-05 00:53:01.346088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-05 00:53:01.346095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-05 00:53:01.346101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-05 00:53:01.346107 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-05 00:53:01.346120 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
00:53:01.346126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-05 00:53:01.346132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-05 00:53:01.346138 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346145 | orchestrator | 2026-02-05 00:53:01.346155 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-05 00:53:01.346161 | orchestrator | Thursday 05 February 2026 00:48:52 +0000 (0:00:01.142) 0:01:53.188 ***** 2026-02-05 00:53:01.346167 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.346173 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.346179 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.346185 | orchestrator | 2026-02-05 00:53:01.346191 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-05 00:53:01.346198 | orchestrator | Thursday 05 February 2026 00:48:54 +0000 (0:00:01.275) 0:01:54.463 ***** 2026-02-05 00:53:01.346204 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.346210 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.346216 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.346222 | orchestrator | 2026-02-05 00:53:01.346228 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-05 00:53:01.346234 | orchestrator | Thursday 05 February 2026 00:48:56 +0000 (0:00:01.911) 0:01:56.375 ***** 2026-02-05 00:53:01.346244 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.346250 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346256 | orchestrator | 
skipping: [testbed-node-2] 2026-02-05 00:53:01.346262 | orchestrator | 2026-02-05 00:53:01.346268 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-05 00:53:01.346275 | orchestrator | Thursday 05 February 2026 00:48:56 +0000 (0:00:00.415) 0:01:56.790 ***** 2026-02-05 00:53:01.346281 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.346287 | orchestrator | 2026-02-05 00:53:01.346293 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-05 00:53:01.346299 | orchestrator | Thursday 05 February 2026 00:48:57 +0000 (0:00:00.748) 0:01:57.538 ***** 2026-02-05 00:53:01.346311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 00:53:01.346323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.346335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 00:53:01.346350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.346362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 00:53:01.346374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.346381 | orchestrator | 2026-02-05 00:53:01.346388 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-05 00:53:01.346394 | orchestrator | Thursday 05 
February 2026 00:49:01 +0000 (0:00:04.244) 0:02:01.782 ***** 2026-02-05 00:53:01.346406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 00:53:01.346421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.346428 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.346438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 00:53:01.346452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.346459 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 00:53:01.346483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.346490 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346497 | orchestrator | 2026-02-05 00:53:01.346503 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-05 00:53:01.346510 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:04.313) 0:02:06.095 ***** 2026-02-05 00:53:01.346516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:53:01.346523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:53:01.346533 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:53:01.346549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:53:01.346555 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.346562 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:53:01.346569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:53:01.346575 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346581 | orchestrator | 2026-02-05 00:53:01.346588 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-05 00:53:01.346594 | orchestrator | Thursday 05 February 2026 00:49:09 +0000 (0:00:03.576) 0:02:09.672 ***** 2026-02-05 00:53:01.346600 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.346607 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.346613 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.346619 | orchestrator | 2026-02-05 00:53:01.346625 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-05 00:53:01.346632 | orchestrator | Thursday 05 February 2026 00:49:10 +0000 (0:00:01.224) 0:02:10.896 ***** 2026-02-05 00:53:01.346638 | orchestrator | changed: [testbed-node-0] 
2026-02-05 00:53:01.346644 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.346650 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.346656 | orchestrator | 2026-02-05 00:53:01.346666 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-05 00:53:01.346672 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:01.856) 0:02:12.753 ***** 2026-02-05 00:53:01.346678 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.346685 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346691 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346697 | orchestrator | 2026-02-05 00:53:01.346703 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-05 00:53:01.346709 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:00.280) 0:02:13.033 ***** 2026-02-05 00:53:01.346716 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.346727 | orchestrator | 2026-02-05 00:53:01.346733 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-05 00:53:01.346739 | orchestrator | Thursday 05 February 2026 00:49:13 +0000 (0:00:00.958) 0:02:13.992 ***** 2026-02-05 00:53:01.346746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 00:53:01.346756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 00:53:01.346762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 00:53:01.346769 | orchestrator | 2026-02-05 00:53:01.346775 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-05 00:53:01.346781 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:04.897) 0:02:18.889 ***** 2026-02-05 00:53:01.346788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 00:53:01.346794 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.346810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 00:53:01.346820 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 00:53:01.346833 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346840 | orchestrator | 2026-02-05 00:53:01.346856 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-05 00:53:01.346863 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.460) 0:02:19.350 ***** 2026-02-05 00:53:01.346869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:53:01.346875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:53:01.346882 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.346891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:53:01.346897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:53:01.346903 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.346909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:53:01.346916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})  2026-02-05 00:53:01.346922 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.346928 | orchestrator | 2026-02-05 00:53:01.346935 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-05 00:53:01.346941 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.791) 0:02:20.141 ***** 2026-02-05 00:53:01.346947 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.346953 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.346959 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.346966 | orchestrator | 2026-02-05 00:53:01.346972 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-05 00:53:01.346978 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:01.248) 0:02:21.389 ***** 2026-02-05 00:53:01.346984 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.346990 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.346997 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.347003 | orchestrator | 2026-02-05 00:53:01.347009 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-05 00:53:01.347015 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:01.847) 0:02:23.237 ***** 2026-02-05 00:53:01.347021 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347028 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347039 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347045 | orchestrator | 2026-02-05 00:53:01.347052 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-05 00:53:01.347058 | orchestrator | Thursday 05 February 2026 00:49:23 +0000 (0:00:00.275) 0:02:23.512 ***** 2026-02-05 00:53:01.347064 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 
2026-02-05 00:53:01.347070 | orchestrator | 2026-02-05 00:53:01.347076 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-05 00:53:01.347082 | orchestrator | Thursday 05 February 2026 00:49:24 +0000 (0:00:00.997) 0:02:24.510 ***** 2026-02-05 00:53:01.347098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:53:01.347107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:53:01.347129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:53:01.347136 | orchestrator | 2026-02-05 00:53:01.347142 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-05 00:53:01.347149 | orchestrator | Thursday 05 February 2026 00:49:27 +0000 (0:00:03.223) 0:02:27.734 ***** 2026-02-05 00:53:01.347159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:53:01.347170 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:53:01.347204 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-02-05 00:53:01.347226 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347232 | orchestrator | 2026-02-05 00:53:01.347238 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-05 00:53:01.347245 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:00.579) 0:02:28.313 ***** 2026-02-05 00:53:01.347251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:53:01.347258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:53:01.347268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:53:01.347275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:53:01.347282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 00:53:01.347289 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:53:01.347305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:53:01.347311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:53:01.347318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:53:01.347324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 00:53:01.347330 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:53:01.347346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:53:01.347353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:53:01.347359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:53:01.347366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 00:53:01.347372 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347378 | orchestrator | 2026-02-05 00:53:01.347385 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-05 00:53:01.347391 | orchestrator | Thursday 05 February 2026 00:49:29 +0000 (0:00:01.380) 0:02:29.694 ***** 2026-02-05 00:53:01.347397 | orchestrator | changed: [testbed-node-1] 2026-02-05 
00:53:01.347403 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.347412 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.347418 | orchestrator | 2026-02-05 00:53:01.347425 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-05 00:53:01.347431 | orchestrator | Thursday 05 February 2026 00:49:30 +0000 (0:00:01.362) 0:02:31.056 ***** 2026-02-05 00:53:01.347437 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.347447 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.347453 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.347459 | orchestrator | 2026-02-05 00:53:01.347466 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-05 00:53:01.347472 | orchestrator | Thursday 05 February 2026 00:49:32 +0000 (0:00:01.976) 0:02:33.033 ***** 2026-02-05 00:53:01.347478 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347484 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347490 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347496 | orchestrator | 2026-02-05 00:53:01.347502 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-05 00:53:01.347508 | orchestrator | Thursday 05 February 2026 00:49:33 +0000 (0:00:00.286) 0:02:33.319 ***** 2026-02-05 00:53:01.347515 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347521 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347527 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347533 | orchestrator | 2026-02-05 00:53:01.347539 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-05 00:53:01.347545 | orchestrator | Thursday 05 February 2026 00:49:33 +0000 (0:00:00.277) 0:02:33.597 ***** 2026-02-05 00:53:01.347551 | orchestrator | included: keystone for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-05 00:53:01.347557 | orchestrator | 2026-02-05 00:53:01.347563 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-05 00:53:01.347569 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:01.091) 0:02:34.689 ***** 2026-02-05 00:53:01.347577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:53:01.347587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:53:01.347595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:53:01.347608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:53:01.347615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:53:01.347622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:53:01.347632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:53:01.347639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:53:01.347646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:53:01.347656 | orchestrator | 2026-02-05 00:53:01.347665 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-05 00:53:01.347672 | orchestrator | Thursday 05 February 2026 00:49:37 +0000 (0:00:03.373) 0:02:38.062 ***** 2026-02-05 00:53:01.347678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:53:01.347685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:53:01.347692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:53:01.347699 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:53:01.347720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-05 00:53:01.347729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:53:01.347736 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:53:01.347749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:53:01.347760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:53:01.347766 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347779 | orchestrator | 2026-02-05 00:53:01.347785 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-05 00:53:01.347791 | orchestrator | Thursday 05 February 2026 00:49:38 +0000 (0:00:00.504) 0:02:38.566 ***** 2026-02-05 00:53:01.347798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:53:01.347805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:53:01.347811 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.347817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:53:01.347829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:53:01.347835 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.347842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:53:01.347890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:53:01.347901 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.347911 | orchestrator | 2026-02-05 00:53:01.347921 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-05 00:53:01.347931 | orchestrator | Thursday 05 February 2026 00:49:39 +0000 (0:00:00.982) 0:02:39.549 ***** 2026-02-05 00:53:01.347941 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.347951 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.347961 | orchestrator | 
changed: [testbed-node-2] 2026-02-05 00:53:01.347972 | orchestrator | 2026-02-05 00:53:01.347981 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-05 00:53:01.347991 | orchestrator | Thursday 05 February 2026 00:49:40 +0000 (0:00:01.336) 0:02:40.886 ***** 2026-02-05 00:53:01.348001 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.348011 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.348021 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.348028 | orchestrator | 2026-02-05 00:53:01.348034 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-05 00:53:01.348040 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:01.936) 0:02:42.823 ***** 2026-02-05 00:53:01.348046 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.348053 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.348059 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.348065 | orchestrator | 2026-02-05 00:53:01.348071 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-05 00:53:01.348077 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:00.273) 0:02:43.097 ***** 2026-02-05 00:53:01.348083 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.348089 | orchestrator | 2026-02-05 00:53:01.348095 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-05 00:53:01.348115 | orchestrator | Thursday 05 February 2026 00:49:43 +0000 (0:00:01.157) 0:02:44.255 ***** 2026-02-05 00:53:01.348127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 00:53:01.348134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 00:53:01.348157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 00:53:01.348176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348182 | orchestrator | 2026-02-05 00:53:01.348187 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-05 00:53:01.348193 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:03.412) 0:02:47.667 ***** 2026-02-05 00:53:01.348198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 00:53:01.348207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348213 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.348218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2026-02-05 00:53:01.348229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348235 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.348244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 00:53:01.348250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348258 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.348264 | orchestrator | 2026-02-05 00:53:01.348269 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-05 00:53:01.348275 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:00.590) 0:02:48.257 ***** 2026-02-05 00:53:01.348280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:53:01.348286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:53:01.348292 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.348297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:53:01.348303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:53:01.348308 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 00:53:01.348314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:53:01.348323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:53:01.348328 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.348333 | orchestrator | 2026-02-05 00:53:01.348339 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-05 00:53:01.348344 | orchestrator | Thursday 05 February 2026 00:49:49 +0000 (0:00:01.039) 0:02:49.297 ***** 2026-02-05 00:53:01.348350 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.348355 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.348360 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.348366 | orchestrator | 2026-02-05 00:53:01.348371 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-05 00:53:01.348377 | orchestrator | Thursday 05 February 2026 00:49:50 +0000 (0:00:01.357) 0:02:50.654 ***** 2026-02-05 00:53:01.348382 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.348387 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.348393 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.348398 | orchestrator | 2026-02-05 00:53:01.348403 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-05 00:53:01.348409 | orchestrator | Thursday 05 February 2026 00:49:52 +0000 (0:00:01.821) 0:02:52.476 ***** 2026-02-05 00:53:01.348414 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.348420 | orchestrator | 
2026-02-05 00:53:01.348425 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-05 00:53:01.348431 | orchestrator | Thursday 05 February 2026 00:49:53 +0000 (0:00:01.139) 0:02:53.615 ***** 2026-02-05 00:53:01.348440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 00:53:01.348446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 00:53:01.348478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 00:53:01.348507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.348512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348527 | orchestrator |
2026-02-05 00:53:01.348533 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-02-05 00:53:01.348538 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:03.233) 0:02:56.849 *****
2026-02-05 00:53:01.348544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 00:53:01.348553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348573 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.348579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 00:53:01.348678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 00:53:01.348710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348716 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.348722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.348743 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.348748 | orchestrator |
2026-02-05 00:53:01.348754 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-05 00:53:01.348759 | orchestrator | Thursday 05 February 2026 00:49:57 +0000 (0:00:00.674) 0:02:57.523 *****
2026-02-05 00:53:01.348765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-05 00:53:01.348770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-05 00:53:01.348776 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.348781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-05 00:53:01.348790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-05 00:53:01.348796 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.348801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-05 00:53:01.348809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-05 00:53:01.348815 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.348820 | orchestrator |
2026-02-05 00:53:01.348826 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-05 00:53:01.348831 | orchestrator | Thursday 05 February 2026 00:49:58 +0000 (0:00:00.923) 0:02:58.446 *****
2026-02-05 00:53:01.348837 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:53:01.348842 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:53:01.348860 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:53:01.348865 | orchestrator |
2026-02-05 00:53:01.348871 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-05 00:53:01.348876 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:01.225) 0:02:59.672 *****
2026-02-05 00:53:01.348882 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:53:01.348887 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:53:01.348893 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:53:01.348898 | orchestrator |
2026-02-05 00:53:01.348903 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-05 00:53:01.348909 | orchestrator | Thursday 05 February 2026 00:50:01 +0000 (0:00:01.859) 0:03:01.532 *****
2026-02-05 00:53:01.348915 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:53:01.348920 | orchestrator |
2026-02-05 00:53:01.348925 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-05 00:53:01.348931 | orchestrator | Thursday 05 February 2026 00:50:02 +0000 (0:00:01.011) 0:03:02.544 *****
2026-02-05 00:53:01.348937 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:53:01.348942 | orchestrator |
2026-02-05 00:53:01.348948 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-05 00:53:01.348953 | orchestrator | Thursday 05 February 2026 00:50:05 +0000 (0:00:03.542) 0:03:06.086 *****
2026-02-05 00:53:01.348964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:53:01.348974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 00:53:01.348980 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.348989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:53:01.348996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 00:53:01.349001 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.349013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:53:01.349023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 00:53:01.349029 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.349035 | orchestrator |
2026-02-05 00:53:01.349040 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-05 00:53:01.349046 | orchestrator | Thursday 05 February 2026 00:50:08 +0000 (0:00:02.584) 0:03:08.670 *****
2026-02-05 00:53:01.349055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:53:01.349064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 00:53:01.349070 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.349079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:53:01.349089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:53:01.349099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 00:53:01.349105 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.349113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 00:53:01.349131 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.349137 | orchestrator |
2026-02-05 00:53:01.349143 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-02-05 00:53:01.349148 | orchestrator | Thursday 05 February 2026 00:50:10 +0000 (0:00:01.950) 0:03:10.620 *****
2026-02-05 00:53:01.349154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 00:53:01.349160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 00:53:01.349166 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.349171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 00:53:01.349183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 00:53:01.349189 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.349195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 00:53:01.349201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 00:53:01.349206 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.349212 | orchestrator |
2026-02-05 00:53:01.349220 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-02-05 00:53:01.349225 | orchestrator | Thursday 05 February 2026 00:50:12 +0000 (0:00:02.141) 0:03:12.761 *****
2026-02-05 00:53:01.349231 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:53:01.349236 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:53:01.349242 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:53:01.349247 | orchestrator |
2026-02-05 00:53:01.349253 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-05 00:53:01.349258 | orchestrator | Thursday 05 February 2026 00:50:14 +0000 (0:00:01.959) 0:03:14.721 *****
2026-02-05 00:53:01.349264 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.349269 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.349274 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.349280 | orchestrator |
2026-02-05 00:53:01.349285 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-05 00:53:01.349291 | orchestrator | Thursday 05 February 2026 00:50:15 +0000 (0:00:01.000) 0:03:15.722 *****
2026-02-05 00:53:01.349297 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.349304 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.349310 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.349317 | orchestrator |
2026-02-05 00:53:01.349323 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-05 00:53:01.349330 | orchestrator | Thursday 05 February 2026 00:50:15 +0000 (0:00:00.392) 0:03:16.115 *****
2026-02-05 00:53:01.349336 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:53:01.349343 | orchestrator |
2026-02-05 00:53:01.349349 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-05 00:53:01.349359 | orchestrator | Thursday 05 February 2026 00:50:17 +0000 (0:00:01.212) 0:03:17.327 *****
2026-02-05 00:53:01.349366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 00:53:01.349377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 00:53:01.349385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 00:53:01.349392 | orchestrator |
2026-02-05 00:53:01.349399 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-02-05 00:53:01.349405 | orchestrator | Thursday 05 February 2026 00:50:18 +0000 (0:00:01.440) 0:03:18.767 *****
2026-02-05 00:53:01.349415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 00:53:01.349422 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.349428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 00:53:01.349440 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.349447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 00:53:01.349454 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.349460 | orchestrator |
2026-02-05 00:53:01.349467 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached]
********************* 2026-02-05 00:53:01.349473 | orchestrator | Thursday 05 February 2026 00:50:19 +0000 (0:00:00.630) 0:03:19.398 ***** 2026-02-05 00:53:01.349480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-05 00:53:01.349487 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.349496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-05 00:53:01.349503 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.349510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-05 00:53:01.349517 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.349523 | orchestrator | 2026-02-05 00:53:01.349530 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-05 00:53:01.349536 | orchestrator | Thursday 05 February 2026 00:50:19 +0000 (0:00:00.561) 0:03:19.959 ***** 2026-02-05 00:53:01.349543 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.349549 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.349556 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.349562 | orchestrator | 2026-02-05 00:53:01.349569 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-05 
00:53:01.349576 | orchestrator | Thursday 05 February 2026 00:50:20 +0000 (0:00:00.445) 0:03:20.404 ***** 2026-02-05 00:53:01.349582 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.349589 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.349595 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.349601 | orchestrator | 2026-02-05 00:53:01.349608 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-05 00:53:01.349614 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:01.196) 0:03:21.601 ***** 2026-02-05 00:53:01.349621 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.349628 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.349635 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.349641 | orchestrator | 2026-02-05 00:53:01.349652 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-05 00:53:01.349657 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:00.428) 0:03:22.030 ***** 2026-02-05 00:53:01.349665 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.349671 | orchestrator | 2026-02-05 00:53:01.349676 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-05 00:53:01.349682 | orchestrator | Thursday 05 February 2026 00:50:22 +0000 (0:00:01.104) 0:03:23.135 ***** 2026-02-05 00:53:01.349692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 00:53:01.349702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 00:53:01.349726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 00:53:01.349776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:53:01.349786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349815 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:53:01.349891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.349927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.349957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:53:01.349971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.349980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.349994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.350190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-02-05 00:53:01.350222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.350344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2026-02-05 00:53:01.350371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350376 | orchestrator | 2026-02-05 00:53:01.350382 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-05 00:53:01.350387 | orchestrator | Thursday 05 February 2026 00:50:26 +0000 (0:00:03.897) 0:03:27.032 ***** 2026-02-05 00:53:01.350392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 00:53:01.350446 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:53:01.350496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.350633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350638 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.350644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 00:53:01.350649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:53:01.350773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.350866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 00:53:01.350875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350880 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.350947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.350967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.350977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:53:01.351014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.351021 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.351026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 
00:53:01.351039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.351045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.351059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:53:01.351084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:53:01.351099 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:53:01.351113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:53:01.351125 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.351130 | orchestrator | 2026-02-05 00:53:01.351135 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-05 00:53:01.351140 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:01.523) 0:03:28.556 ***** 2026-02-05 00:53:01.351145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:53:01.351150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:53:01.351155 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.351179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:53:01.351185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:53:01.351190 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.351195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2026-02-05 00:53:01.351200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:53:01.351205 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.351210 | orchestrator | 2026-02-05 00:53:01.351218 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-05 00:53:01.351227 | orchestrator | Thursday 05 February 2026 00:50:29 +0000 (0:00:01.212) 0:03:29.769 ***** 2026-02-05 00:53:01.351234 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.351242 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.351249 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.351257 | orchestrator | 2026-02-05 00:53:01.351265 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-05 00:53:01.351273 | orchestrator | Thursday 05 February 2026 00:50:30 +0000 (0:00:01.239) 0:03:31.008 ***** 2026-02-05 00:53:01.351280 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.351287 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.351294 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.351302 | orchestrator | 2026-02-05 00:53:01.351310 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-05 00:53:01.351319 | orchestrator | Thursday 05 February 2026 00:50:32 +0000 (0:00:01.996) 0:03:33.005 ***** 2026-02-05 00:53:01.351327 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.351335 | orchestrator | 2026-02-05 00:53:01.351358 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-05 00:53:01.351368 | orchestrator | Thursday 05 February 2026 00:50:34 +0000 (0:00:01.500) 
0:03:34.506 ***** 2026-02-05 00:53:01.351374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.351380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.351405 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.351411 | orchestrator | 2026-02-05 00:53:01.351416 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-05 00:53:01.351421 | orchestrator | Thursday 05 February 2026 00:50:37 +0000 (0:00:03.062) 0:03:37.568 ***** 2026-02-05 00:53:01.351426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.351435 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.351444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.351449 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.351454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.351459 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.351464 | orchestrator | 2026-02-05 00:53:01.351469 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-05 00:53:01.351474 | orchestrator | Thursday 05 February 2026 00:50:37 +0000 (0:00:00.450) 0:03:38.019 ***** 2026-02-05 00:53:01.351479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:53:01.351484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:53:01.351489 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.351513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:53:01.351522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:53:01.351530 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.351538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:53:01.351546 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:53:01.351558 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.351566 | orchestrator | 2026-02-05 00:53:01.351574 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-05 00:53:01.351582 | orchestrator | Thursday 05 February 2026 00:50:38 +0000 (0:00:00.959) 0:03:38.978 ***** 2026-02-05 00:53:01.351591 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.351599 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.351607 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.351666 | orchestrator | 2026-02-05 00:53:01.351678 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-05 00:53:01.351687 | orchestrator | Thursday 05 February 2026 00:50:39 +0000 (0:00:01.201) 0:03:40.179 ***** 2026-02-05 00:53:01.351696 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.351705 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.351713 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.351722 | orchestrator | 2026-02-05 00:53:01.351730 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-05 00:53:01.351743 | orchestrator | Thursday 05 February 2026 00:50:41 +0000 (0:00:01.990) 0:03:42.170 ***** 2026-02-05 00:53:01.351752 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.351761 | orchestrator | 2026-02-05 00:53:01.351770 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-05 00:53:01.351779 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:01.518) 0:03:43.688 ***** 2026-02-05 00:53:01.351788 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.351799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.351906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 00:53:01.351944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.351959 | orchestrator | 2026-02-05 00:53:01.351966 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-05 00:53:01.351975 | orchestrator | Thursday 05 February 2026 00:50:47 +0000 (0:00:04.124) 0:03:47.812 
***** 2026-02-05 00:53:01.351988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.351997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.352006 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.352014 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.352062 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.352074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.352082 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.352100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.352133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:53:01.352140 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352145 | orchestrator | 
2026-02-05 00:53:01.352150 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-05 00:53:01.352154 | orchestrator | Thursday 05 February 2026 00:50:48 +0000 (0:00:00.651) 0:03:48.463 ***** 2026-02-05 00:53:01.352160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352184 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352209 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:53:01.352237 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352242 | orchestrator | 2026-02-05 00:53:01.352247 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-05 00:53:01.352252 | orchestrator | Thursday 05 February 2026 00:50:49 +0000 (0:00:01.148) 0:03:49.612 ***** 2026-02-05 00:53:01.352256 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.352261 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.352266 | orchestrator | changed: [testbed-node-2] 2026-02-05 
00:53:01.352271 | orchestrator | 2026-02-05 00:53:01.352276 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-05 00:53:01.352280 | orchestrator | Thursday 05 February 2026 00:50:50 +0000 (0:00:01.329) 0:03:50.942 ***** 2026-02-05 00:53:01.352285 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.352290 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.352295 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.352300 | orchestrator | 2026-02-05 00:53:01.352319 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-05 00:53:01.352324 | orchestrator | Thursday 05 February 2026 00:50:52 +0000 (0:00:02.010) 0:03:52.953 ***** 2026-02-05 00:53:01.352329 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.352334 | orchestrator | 2026-02-05 00:53:01.352339 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-05 00:53:01.352344 | orchestrator | Thursday 05 February 2026 00:50:53 +0000 (0:00:01.302) 0:03:54.255 ***** 2026-02-05 00:53:01.352349 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-05 00:53:01.352354 | orchestrator | 2026-02-05 00:53:01.352359 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-05 00:53:01.352363 | orchestrator | Thursday 05 February 2026 00:50:54 +0000 (0:00:00.988) 0:03:55.243 ***** 2026-02-05 00:53:01.352369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 00:53:01.352377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 00:53:01.352382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 00:53:01.352388 | orchestrator | 2026-02-05 00:53:01.352392 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-05 00:53:01.352401 | orchestrator | Thursday 05 February 2026 00:50:58 +0000 (0:00:03.447) 0:03:58.691 ***** 2026-02-05 00:53:01.352406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352411 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352421 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352431 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352436 | orchestrator | 2026-02-05 00:53:01.352453 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-05 00:53:01.352459 | orchestrator | Thursday 05 February 2026 00:50:59 +0000 (0:00:01.155) 0:03:59.846 ***** 2026-02-05 00:53:01.352464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 
00:53:01.352469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:53:01.352475 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:53:01.352485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:53:01.352490 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:53:01.352502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:53:01.352508 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352512 | orchestrator | 2026-02-05 00:53:01.352517 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 00:53:01.352527 | orchestrator | Thursday 05 February 2026 00:51:01 +0000 (0:00:01.435) 0:04:01.282 ***** 2026-02-05 00:53:01.352532 | orchestrator | changed: [testbed-node-0] 2026-02-05 
00:53:01.352537 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.352542 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.352547 | orchestrator | 2026-02-05 00:53:01.352551 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 00:53:01.352556 | orchestrator | Thursday 05 February 2026 00:51:03 +0000 (0:00:02.492) 0:04:03.774 ***** 2026-02-05 00:53:01.352561 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.352566 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.352571 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.352575 | orchestrator | 2026-02-05 00:53:01.352580 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-05 00:53:01.352585 | orchestrator | Thursday 05 February 2026 00:51:06 +0000 (0:00:02.905) 0:04:06.680 ***** 2026-02-05 00:53:01.352590 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-05 00:53:01.352595 | orchestrator | 2026-02-05 00:53:01.352600 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-05 00:53:01.352605 | orchestrator | Thursday 05 February 2026 00:51:07 +0000 (0:00:00.739) 0:04:07.419 ***** 2026-02-05 00:53:01.352610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352615 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352625 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352649 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352654 | orchestrator | 2026-02-05 00:53:01.352659 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-05 00:53:01.352664 | orchestrator | Thursday 05 February 2026 00:51:08 +0000 (0:00:01.135) 0:04:08.555 ***** 2026-02-05 00:53:01.352669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352677 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352689 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:53:01.352699 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352704 | orchestrator | 2026-02-05 00:53:01.352709 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-05 00:53:01.352714 | orchestrator | Thursday 05 February 2026 00:51:09 +0000 (0:00:01.130) 0:04:09.686 ***** 2026-02-05 00:53:01.352718 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352723 | orchestrator | skipping: [testbed-node-1] 
2026-02-05 00:53:01.352728 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352733 | orchestrator | 2026-02-05 00:53:01.352737 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 00:53:01.352742 | orchestrator | Thursday 05 February 2026 00:51:10 +0000 (0:00:01.356) 0:04:11.043 ***** 2026-02-05 00:53:01.352747 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.352752 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.352757 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.352762 | orchestrator | 2026-02-05 00:53:01.352766 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 00:53:01.352771 | orchestrator | Thursday 05 February 2026 00:51:13 +0000 (0:00:02.307) 0:04:13.350 ***** 2026-02-05 00:53:01.352776 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.352781 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.352786 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.352790 | orchestrator | 2026-02-05 00:53:01.352795 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-05 00:53:01.352800 | orchestrator | Thursday 05 February 2026 00:51:16 +0000 (0:00:03.381) 0:04:16.732 ***** 2026-02-05 00:53:01.352805 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-05 00:53:01.352810 | orchestrator | 2026-02-05 00:53:01.352814 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-05 00:53:01.352819 | orchestrator | Thursday 05 February 2026 00:51:17 +0000 (0:00:00.946) 0:04:17.678 ***** 2026-02-05 00:53:01.352838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:53:01.352855 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:53:01.352872 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:53:01.352882 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.352887 | orchestrator | 2026-02-05 00:53:01.352892 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-05 00:53:01.352897 | orchestrator | Thursday 05 
February 2026 00:51:18 +0000 (0:00:00.929) 0:04:18.608 ***** 2026-02-05 00:53:01.352904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:53:01.352910 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.352915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:53:01.352919 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.352924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:53:01.352929 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
00:53:01.352934 | orchestrator |
2026-02-05 00:53:01.352939 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-05 00:53:01.352944 | orchestrator | Thursday 05 February 2026 00:51:19 +0000 (0:00:01.124) 0:04:19.732 *****
2026-02-05 00:53:01.352948 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.352953 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.352958 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.352963 | orchestrator |
2026-02-05 00:53:01.352968 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-05 00:53:01.352972 | orchestrator | Thursday 05 February 2026 00:51:21 +0000 (0:00:01.601) 0:04:21.334 *****
2026-02-05 00:53:01.352980 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:53:01.352985 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:53:01.352990 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:53:01.352995 | orchestrator |
2026-02-05 00:53:01.352999 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-05 00:53:01.353004 | orchestrator | Thursday 05 February 2026 00:51:23 +0000 (0:00:02.154) 0:04:23.488 *****
2026-02-05 00:53:01.353009 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:53:01.353014 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:53:01.353018 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:53:01.353023 | orchestrator |
2026-02-05 00:53:01.353028 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-05 00:53:01.353033 | orchestrator | Thursday 05 February 2026 00:51:26 +0000 (0:00:02.797) 0:04:26.285 *****
2026-02-05 00:53:01.353053 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:53:01.353059 | orchestrator |
2026-02-05 00:53:01.353066 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-05 00:53:01.353075 | orchestrator | Thursday 05 February 2026 00:51:27 +0000 (0:00:01.246) 0:04:27.532 *****
2026-02-05 00:53:01.353084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.353097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 00:53:01.353107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.353161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.353171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.353183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 00:53:01.353192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 00:53:01.353211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.353257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.353278 | orchestrator |
2026-02-05 00:53:01.353286 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-02-05 00:53:01.353294 | orchestrator | Thursday 05 February 2026 00:51:30 +0000 (0:00:03.509) 0:04:31.042 *****
2026-02-05 00:53:01.353302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.353317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 00:53:01.353353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.353380 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.353392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.353405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 00:53:01.353414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.353472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.353554 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.353564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 00:53:01.353582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 00:53:01.353631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:53:01.353643 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.353651 | orchestrator |
2026-02-05 00:53:01.353659 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-05 00:53:01.353668 | orchestrator | Thursday 05 February 2026 00:51:31 +0000 (0:00:00.693) 0:04:31.735 *****
2026-02-05 00:53:01.353677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-05 00:53:01.353686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-05 00:53:01.353695 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.353701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-05 00:53:01.353706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-05 00:53:01.353710 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.353715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-05 00:53:01.353724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-05 00:53:01.353730 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.353739 | orchestrator |
2026-02-05 00:53:01.353743 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-05 00:53:01.353748 | orchestrator | Thursday 05 February 2026 00:51:32 +0000 (0:00:00.931) 0:04:32.667 *****
2026-02-05 00:53:01.353753 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:53:01.353758 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:53:01.353763 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:53:01.353767 | orchestrator |
2026-02-05 00:53:01.353772 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-05 00:53:01.353777 | orchestrator | Thursday 05 February 2026 00:51:34 +0000 (0:00:01.699) 0:04:34.366 *****
2026-02-05 00:53:01.353782 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:53:01.353786 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:53:01.353791 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:53:01.353796 | orchestrator |
2026-02-05 00:53:01.353801 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-05 00:53:01.353805 | orchestrator | Thursday 05 February 2026 00:51:36 +0000 (0:00:02.147) 0:04:36.514 *****
2026-02-05 00:53:01.353810 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:53:01.353815 | orchestrator |
2026-02-05 00:53:01.353820 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-02-05 00:53:01.353825 | orchestrator | Thursday 05 February 2026 00:51:37 +0000 (0:00:01.579) 0:04:38.093 *****
2026-02-05 00:53:01.353830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:53:01.353897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:53:01.353908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:53:01.353923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:53:01.353929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:53:01.353950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:53:01.353957 | orchestrator |
2026-02-05 00:53:01.353962 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-02-05 00:53:01.353967 | orchestrator | Thursday 05 February 2026 00:51:42 +0000 (0:00:04.592) 0:04:42.686 *****
2026-02-05 00:53:01.353972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:53:01.353984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:53:01.353990 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.353995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:53:01.354033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:53:01.354041 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.354046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:53:01.354057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:53:01.354062 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.354066 | orchestrator |
2026-02-05 00:53:01.354071 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-02-05 00:53:01.354076 | orchestrator | Thursday 05 February 2026 00:51:43 +0000 (0:00:01.046) 0:04:43.733 *****
2026-02-05 00:53:01.354080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-05 00:53:01.354085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-05 00:53:01.354090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-05 00:53:01.354095 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.354100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-05 00:53:01.354104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-05 00:53:01.354109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-05 00:53:01.354114 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.354118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-05 00:53:01.354138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:53:01.354143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:53:01.354151 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.354156 | orchestrator | 2026-02-05 00:53:01.354161 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-05 00:53:01.354165 | orchestrator | Thursday 05 February 2026 00:51:44 +0000 (0:00:00.808) 0:04:44.542 ***** 2026-02-05 00:53:01.354170 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.354174 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.354179 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.354183 | orchestrator | 2026-02-05 00:53:01.354188 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-05 00:53:01.354192 | orchestrator | Thursday 05 February 2026 00:51:44 +0000 (0:00:00.395) 0:04:44.937 ***** 2026-02-05 00:53:01.354197 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.354201 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.354206 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.354210 | orchestrator | 2026-02-05 
00:53:01.354215 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-05 00:53:01.354219 | orchestrator | Thursday 05 February 2026 00:51:45 +0000 (0:00:01.128) 0:04:46.066 ***** 2026-02-05 00:53:01.354224 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:53:01.354229 | orchestrator | 2026-02-05 00:53:01.354233 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-05 00:53:01.354238 | orchestrator | Thursday 05 February 2026 00:51:47 +0000 (0:00:01.587) 0:04:47.654 ***** 2026-02-05 00:53:01.354247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 00:53:01.354256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 00:53:01.354265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:53:01.354273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:53:01.354317 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 00:53:01.354428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:53:01.354438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354462 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 00:53:01.354471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:53:01.354491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 00:53:01.354525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 00:53:01.354543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:53:01.354592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:53:01.354609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-05 00:53:01.354620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354668 | orchestrator | 2026-02-05 00:53:01.354681 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-05 00:53:01.354690 | orchestrator | Thursday 05 February 2026 00:51:51 +0000 (0:00:03.645) 0:04:51.299 ***** 2026-02-05 00:53:01.354698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 00:53:01.354706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:53:01.354717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:53:01.354733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:53:01.354746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 00:53:01.354760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  
2026-02-05 00:53:01.354769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 00:53:01.354781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 00:53:01.354789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 00:53:01.354838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 00:53:01.354855 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.354866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 00:53:01.354879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 00:53:01.354886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 00:53:01.354907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 00:53:01.354926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 00:53:01.354934 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.354941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.354963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 00:53:01.354976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 00:53:01.354987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 00:53:01.355006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.355019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:53:01.355027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 00:53:01.355035 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355042 | orchestrator |
2026-02-05 00:53:01.355050 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-02-05 00:53:01.355058 | orchestrator | Thursday 05 February 2026 00:51:51 +0000 (0:00:00.727) 0:04:52.027 *****
2026-02-05 00:53:01.355065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-05 00:53:01.355074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-05 00:53:01.355082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 00:53:01.355091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 00:53:01.355099 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-05 00:53:01.355119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-05 00:53:01.355127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 00:53:01.355135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 00:53:01.355142 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-05 00:53:01.355158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-05 00:53:01.355176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 00:53:01.355184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 00:53:01.355192 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355200 | orchestrator |
2026-02-05 00:53:01.355207 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-05 00:53:01.355215 | orchestrator | Thursday 05 February 2026 00:51:53 +0000 (0:00:01.632) 0:04:53.659 *****
2026-02-05 00:53:01.355222 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355229 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355237 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355245 | orchestrator |
2026-02-05 00:53:01.355252 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-05 00:53:01.355260 | orchestrator | Thursday 05 February 2026 00:51:53 +0000 (0:00:00.434) 0:04:54.094 *****
2026-02-05 00:53:01.355267 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355275 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355282 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355290 | orchestrator |
2026-02-05 00:53:01.355297 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-05 00:53:01.355305 | orchestrator | Thursday 05 February 2026 00:51:55 +0000 (0:00:01.297) 0:04:55.391 *****
2026-02-05 00:53:01.355313 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:53:01.355320 | orchestrator |
2026-02-05 00:53:01.355327 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-05 00:53:01.355334 | orchestrator | Thursday 05 February 2026 00:51:56 +0000 (0:00:01.777) 0:04:57.168 *****
2026-02-05 00:53:01.355348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 00:53:01.355358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 00:53:01.355376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 00:53:01.355385 | orchestrator |
2026-02-05 00:53:01.355393 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-05 00:53:01.355401 | orchestrator | Thursday 05 February 2026 00:51:58 +0000 (0:00:02.065) 0:04:59.234 *****
2026-02-05 00:53:01.355409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 00:53:01.355417 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 00:53:01.355437 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 00:53:01.355457 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355465 | orchestrator |
2026-02-05 00:53:01.355473 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-05 00:53:01.355480 | orchestrator | Thursday 05 February 2026 00:51:59 +0000 (0:00:00.378) 0:04:59.612 *****
2026-02-05 00:53:01.355488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-05 00:53:01.355496 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-05 00:53:01.355514 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-05 00:53:01.355529 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355537 | orchestrator |
2026-02-05 00:53:01.355544 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-05 00:53:01.355552 | orchestrator | Thursday 05 February 2026 00:51:59 +0000 (0:00:00.579) 0:05:00.192 *****
2026-02-05 00:53:01.355560 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355568 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355576 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355581 | orchestrator |
2026-02-05 00:53:01.355585 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-05 00:53:01.355590 | orchestrator | Thursday 05 February 2026 00:52:00 +0000 (0:00:00.665) 0:05:00.858 *****
2026-02-05 00:53:01.355595 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355599 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.355604 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.355608 | orchestrator |
2026-02-05 00:53:01.355613 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-05 00:53:01.355617 | orchestrator | Thursday 05 February 2026 00:52:01 +0000 (0:00:01.110) 0:05:01.968 *****
2026-02-05 00:53:01.355622 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:53:01.355626 | orchestrator |
2026-02-05 00:53:01.355631 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-05 00:53:01.355635 | orchestrator | Thursday 05 February 2026 00:52:03 +0000 (0:00:01.420) 0:05:03.389 *****
2026-02-05 00:53:01.355640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355684 | orchestrator |
2026-02-05 00:53:01.355689 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-02-05 00:53:01.355693 | orchestrator | Thursday 05 February 2026 00:52:08 +0000 (0:00:05.439) 0:05:08.829 *****
2026-02-05 00:53:01.355698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355710 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.355714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 00:53:01.355725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.355734 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.355741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.355752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 00:53:01.355761 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.355769 | orchestrator | 2026-02-05 00:53:01.355777 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-05 00:53:01.355784 | orchestrator | Thursday 05 February 2026 00:52:09 +0000 (0:00:00.799) 0:05:09.629 ***** 2026-02-05 00:53:01.355791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355829 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.355837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355886 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.355890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:53:01.355909 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.355914 | orchestrator | 2026-02-05 00:53:01.355918 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-05 00:53:01.355923 | orchestrator | Thursday 05 February 2026 00:52:10 +0000 (0:00:00.867) 0:05:10.496 ***** 2026-02-05 00:53:01.355927 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.355932 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.355936 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.355941 | orchestrator | 2026-02-05 00:53:01.355946 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-05 00:53:01.355950 | orchestrator | Thursday 05 February 2026 00:52:11 +0000 (0:00:01.286) 0:05:11.783 ***** 2026-02-05 00:53:01.355955 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.355959 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.355967 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.355972 | orchestrator | 2026-02-05 00:53:01.355976 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-05 00:53:01.355981 | orchestrator | Thursday 05 February 2026 00:52:13 +0000 (0:00:02.136) 0:05:13.920 ***** 2026-02-05 00:53:01.355986 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.355990 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.355995 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.355999 | 
orchestrator | 2026-02-05 00:53:01.356004 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-05 00:53:01.356012 | orchestrator | Thursday 05 February 2026 00:52:14 +0000 (0:00:00.525) 0:05:14.445 ***** 2026-02-05 00:53:01.356017 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356022 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356026 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356031 | orchestrator | 2026-02-05 00:53:01.356035 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-05 00:53:01.356041 | orchestrator | Thursday 05 February 2026 00:52:14 +0000 (0:00:00.276) 0:05:14.722 ***** 2026-02-05 00:53:01.356049 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356057 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356064 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356072 | orchestrator | 2026-02-05 00:53:01.356079 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-05 00:53:01.356086 | orchestrator | Thursday 05 February 2026 00:52:14 +0000 (0:00:00.292) 0:05:15.015 ***** 2026-02-05 00:53:01.356093 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356100 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356107 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356115 | orchestrator | 2026-02-05 00:53:01.356123 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-05 00:53:01.356131 | orchestrator | Thursday 05 February 2026 00:52:15 +0000 (0:00:00.290) 0:05:15.305 ***** 2026-02-05 00:53:01.356138 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356147 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356154 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356162 | 
orchestrator | 2026-02-05 00:53:01.356171 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-05 00:53:01.356176 | orchestrator | Thursday 05 February 2026 00:52:15 +0000 (0:00:00.521) 0:05:15.826 ***** 2026-02-05 00:53:01.356180 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356185 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356189 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356194 | orchestrator | 2026-02-05 00:53:01.356198 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-05 00:53:01.356203 | orchestrator | Thursday 05 February 2026 00:52:16 +0000 (0:00:00.474) 0:05:16.301 ***** 2026-02-05 00:53:01.356207 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356212 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356217 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356221 | orchestrator | 2026-02-05 00:53:01.356226 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-05 00:53:01.356230 | orchestrator | Thursday 05 February 2026 00:52:16 +0000 (0:00:00.657) 0:05:16.959 ***** 2026-02-05 00:53:01.356235 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356240 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356244 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356249 | orchestrator | 2026-02-05 00:53:01.356253 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-05 00:53:01.356258 | orchestrator | Thursday 05 February 2026 00:52:17 +0000 (0:00:00.335) 0:05:17.295 ***** 2026-02-05 00:53:01.356262 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356267 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356272 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356276 | orchestrator | 2026-02-05 00:53:01.356284 | orchestrator 
| RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-05 00:53:01.356289 | orchestrator | Thursday 05 February 2026 00:52:18 +0000 (0:00:01.350) 0:05:18.645 ***** 2026-02-05 00:53:01.356293 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356298 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356302 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356307 | orchestrator | 2026-02-05 00:53:01.356311 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-05 00:53:01.356316 | orchestrator | Thursday 05 February 2026 00:52:19 +0000 (0:00:00.859) 0:05:19.505 ***** 2026-02-05 00:53:01.356325 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356329 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356334 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356338 | orchestrator | 2026-02-05 00:53:01.356343 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-05 00:53:01.356347 | orchestrator | Thursday 05 February 2026 00:52:20 +0000 (0:00:00.845) 0:05:20.350 ***** 2026-02-05 00:53:01.356352 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.356356 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.356361 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.356365 | orchestrator | 2026-02-05 00:53:01.356370 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-05 00:53:01.356374 | orchestrator | Thursday 05 February 2026 00:52:29 +0000 (0:00:09.807) 0:05:30.157 ***** 2026-02-05 00:53:01.356379 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356383 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356388 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356392 | orchestrator | 2026-02-05 00:53:01.356397 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2026-02-05 00:53:01.356401 | orchestrator | Thursday 05 February 2026 00:52:30 +0000 (0:00:00.963) 0:05:31.120 ***** 2026-02-05 00:53:01.356406 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.356410 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.356415 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.356419 | orchestrator | 2026-02-05 00:53:01.356424 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-05 00:53:01.356428 | orchestrator | Thursday 05 February 2026 00:52:45 +0000 (0:00:14.525) 0:05:45.646 ***** 2026-02-05 00:53:01.356433 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:53:01.356437 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:53:01.356442 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:53:01.356446 | orchestrator | 2026-02-05 00:53:01.356456 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-05 00:53:01.356461 | orchestrator | Thursday 05 February 2026 00:52:46 +0000 (0:00:00.754) 0:05:46.400 ***** 2026-02-05 00:53:01.356465 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:53:01.356470 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:53:01.356474 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:53:01.356479 | orchestrator | 2026-02-05 00:53:01.356483 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-05 00:53:01.356488 | orchestrator | Thursday 05 February 2026 00:52:55 +0000 (0:00:09.270) 0:05:55.671 ***** 2026-02-05 00:53:01.356493 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356497 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356502 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356506 | orchestrator | 2026-02-05 00:53:01.356511 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 
2026-02-05 00:53:01.356515 | orchestrator | Thursday 05 February 2026 00:52:55 +0000 (0:00:00.519) 0:05:56.190 ***** 2026-02-05 00:53:01.356520 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356524 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356529 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356533 | orchestrator | 2026-02-05 00:53:01.356538 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-05 00:53:01.356542 | orchestrator | Thursday 05 February 2026 00:52:56 +0000 (0:00:00.299) 0:05:56.490 ***** 2026-02-05 00:53:01.356547 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356551 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356556 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356560 | orchestrator | 2026-02-05 00:53:01.356565 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-05 00:53:01.356569 | orchestrator | Thursday 05 February 2026 00:52:56 +0000 (0:00:00.309) 0:05:56.799 ***** 2026-02-05 00:53:01.356574 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356578 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356586 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356590 | orchestrator | 2026-02-05 00:53:01.356595 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-05 00:53:01.356599 | orchestrator | Thursday 05 February 2026 00:52:56 +0000 (0:00:00.299) 0:05:57.099 ***** 2026-02-05 00:53:01.356604 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:53:01.356608 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:53:01.356613 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:53:01.356617 | orchestrator | 2026-02-05 00:53:01.356622 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 
2026-02-05 00:53:01.356626 | orchestrator | Thursday 05 February 2026 00:52:57 +0000 (0:00:00.519) 0:05:57.619 *****
2026-02-05 00:53:01.356631 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:53:01.356635 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:53:01.356640 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:53:01.356644 | orchestrator |
2026-02-05 00:53:01.356649 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-05 00:53:01.356653 | orchestrator | Thursday 05 February 2026 00:52:57 +0000 (0:00:00.318) 0:05:57.937 *****
2026-02-05 00:53:01.356658 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:53:01.356662 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:53:01.356667 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:53:01.356671 | orchestrator |
2026-02-05 00:53:01.356676 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-05 00:53:01.356680 | orchestrator | Thursday 05 February 2026 00:52:58 +0000 (0:00:00.797) 0:05:58.734 *****
2026-02-05 00:53:01.356685 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:53:01.356689 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:53:01.356694 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:53:01.356698 | orchestrator |
2026-02-05 00:53:01.356703 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:53:01.356712 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 00:53:01.356720 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 00:53:01.356728 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 00:53:01.356736 | orchestrator |
2026-02-05 00:53:01.356743 | orchestrator |
2026-02-05 00:53:01.356750 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:53:01.356758 | orchestrator | Thursday 05 February 2026 00:52:59 +0000 (0:00:00.743) 0:05:59.478 *****
2026-02-05 00:53:01.356766 | orchestrator | ===============================================================================
2026-02-05 00:53:01.356773 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.53s
2026-02-05 00:53:01.356781 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.81s
2026-02-05 00:53:01.356788 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.27s
2026-02-05 00:53:01.356796 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.44s
2026-02-05 00:53:01.356804 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.90s
2026-02-05 00:53:01.356812 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.79s
2026-02-05 00:53:01.356820 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.76s
2026-02-05 00:53:01.356828 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.73s
2026-02-05 00:53:01.356833 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.59s
2026-02-05 00:53:01.356838 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.31s
2026-02-05 00:53:01.356882 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.24s
2026-02-05 00:53:01.356891 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.12s
2026-02-05 00:53:01.356896 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.90s
2026-02-05 00:53:01.356901 | orchestrator |
haproxy-config : Copying over prometheus haproxy config ----------------- 3.65s
2026-02-05 00:53:01.356905 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.64s
2026-02-05 00:53:01.356910 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.58s
2026-02-05 00:53:01.356914 | orchestrator | mariadb : Ensure mysql monitor user exist ------------------------------- 3.54s
2026-02-05 00:53:01.356919 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.51s
2026-02-05 00:53:01.356923 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.48s
2026-02-05 00:53:01.356928 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 3.47s
2026-02-05 00:53:01.356932 | orchestrator | 2026-02-05 00:53:01 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED
2026-02-05 00:53:01.356937 | orchestrator | 2026-02-05 00:53:01 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED
2026-02-05 00:53:01.356942 | orchestrator | 2026-02-05 00:53:01 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED
2026-02-05 00:53:01.356946 | orchestrator | 2026-02-05 00:53:01 | INFO  | Wait 1 second(s) until the next check
00:53:53 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:53:53.032955 | orchestrator | 2026-02-05 00:53:53 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:56.080742 | orchestrator | 2026-02-05 00:53:56 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:53:56.082252 | orchestrator | 2026-02-05 00:53:56 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:53:56.083918 | orchestrator | 2026-02-05 00:53:56 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:53:56.083957 | orchestrator | 2026-02-05 00:53:56 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:59.133723 | orchestrator | 2026-02-05 00:53:59 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:53:59.135674 | orchestrator | 2026-02-05 00:53:59 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:53:59.137814 | orchestrator | 2026-02-05 00:53:59 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:53:59.137895 | orchestrator | 2026-02-05 00:53:59 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:02.186606 | orchestrator | 2026-02-05 00:54:02 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:02.189052 | orchestrator | 2026-02-05 00:54:02 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:02.189135 | orchestrator | 2026-02-05 00:54:02 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:02.189170 | orchestrator | 2026-02-05 00:54:02 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:05.232119 | orchestrator | 2026-02-05 00:54:05 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:05.233669 | orchestrator | 2026-02-05 00:54:05 | INFO  | Task 
65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:05.235859 | orchestrator | 2026-02-05 00:54:05 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:05.236287 | orchestrator | 2026-02-05 00:54:05 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:08.294325 | orchestrator | 2026-02-05 00:54:08 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:08.296386 | orchestrator | 2026-02-05 00:54:08 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:08.299019 | orchestrator | 2026-02-05 00:54:08 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:08.299248 | orchestrator | 2026-02-05 00:54:08 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:11.334227 | orchestrator | 2026-02-05 00:54:11 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:11.336194 | orchestrator | 2026-02-05 00:54:11 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:11.337909 | orchestrator | 2026-02-05 00:54:11 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:11.337961 | orchestrator | 2026-02-05 00:54:11 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:14.379727 | orchestrator | 2026-02-05 00:54:14 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:14.381498 | orchestrator | 2026-02-05 00:54:14 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:14.382995 | orchestrator | 2026-02-05 00:54:14 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:14.383034 | orchestrator | 2026-02-05 00:54:14 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:17.429145 | orchestrator | 2026-02-05 00:54:17 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state 
STARTED 2026-02-05 00:54:17.429253 | orchestrator | 2026-02-05 00:54:17 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:17.430259 | orchestrator | 2026-02-05 00:54:17 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:17.431166 | orchestrator | 2026-02-05 00:54:17 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:20.473922 | orchestrator | 2026-02-05 00:54:20 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:20.476266 | orchestrator | 2026-02-05 00:54:20 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:20.478155 | orchestrator | 2026-02-05 00:54:20 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:20.478528 | orchestrator | 2026-02-05 00:54:20 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:23.513130 | orchestrator | 2026-02-05 00:54:23 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:23.515354 | orchestrator | 2026-02-05 00:54:23 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:23.517642 | orchestrator | 2026-02-05 00:54:23 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:23.517700 | orchestrator | 2026-02-05 00:54:23 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:26.547971 | orchestrator | 2026-02-05 00:54:26 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:26.548240 | orchestrator | 2026-02-05 00:54:26 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:26.549214 | orchestrator | 2026-02-05 00:54:26 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:26.549260 | orchestrator | 2026-02-05 00:54:26 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:29.591545 | orchestrator | 
2026-02-05 00:54:29 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:29.591851 | orchestrator | 2026-02-05 00:54:29 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:29.592775 | orchestrator | 2026-02-05 00:54:29 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:29.592826 | orchestrator | 2026-02-05 00:54:29 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:32.623924 | orchestrator | 2026-02-05 00:54:32 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:32.624861 | orchestrator | 2026-02-05 00:54:32 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:32.626074 | orchestrator | 2026-02-05 00:54:32 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:32.626121 | orchestrator | 2026-02-05 00:54:32 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:35.668173 | orchestrator | 2026-02-05 00:54:35 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:35.668904 | orchestrator | 2026-02-05 00:54:35 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:35.670056 | orchestrator | 2026-02-05 00:54:35 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:35.670083 | orchestrator | 2026-02-05 00:54:35 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:38.714624 | orchestrator | 2026-02-05 00:54:38 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:38.718163 | orchestrator | 2026-02-05 00:54:38 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:38.719443 | orchestrator | 2026-02-05 00:54:38 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:38.719497 | orchestrator | 2026-02-05 00:54:38 | INFO  | 
Wait 1 second(s) until the next check 2026-02-05 00:54:41.769080 | orchestrator | 2026-02-05 00:54:41 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:41.772138 | orchestrator | 2026-02-05 00:54:41 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:41.774006 | orchestrator | 2026-02-05 00:54:41 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:41.774080 | orchestrator | 2026-02-05 00:54:41 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:44.814829 | orchestrator | 2026-02-05 00:54:44 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:44.818269 | orchestrator | 2026-02-05 00:54:44 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:44.821867 | orchestrator | 2026-02-05 00:54:44 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:44.821957 | orchestrator | 2026-02-05 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:47.864561 | orchestrator | 2026-02-05 00:54:47 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:47.866223 | orchestrator | 2026-02-05 00:54:47 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:47.867885 | orchestrator | 2026-02-05 00:54:47 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:47.867942 | orchestrator | 2026-02-05 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:50.915636 | orchestrator | 2026-02-05 00:54:50 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:50.917748 | orchestrator | 2026-02-05 00:54:50 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:50.919578 | orchestrator | 2026-02-05 00:54:50 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state 
STARTED 2026-02-05 00:54:50.919622 | orchestrator | 2026-02-05 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:53.971384 | orchestrator | 2026-02-05 00:54:53 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:53.973373 | orchestrator | 2026-02-05 00:54:53 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:53.975003 | orchestrator | 2026-02-05 00:54:53 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:53.975258 | orchestrator | 2026-02-05 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:57.026205 | orchestrator | 2026-02-05 00:54:57 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:54:57.028350 | orchestrator | 2026-02-05 00:54:57 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:54:57.030324 | orchestrator | 2026-02-05 00:54:57 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:54:57.030504 | orchestrator | 2026-02-05 00:54:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:00.078806 | orchestrator | 2026-02-05 00:55:00 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:00.081125 | orchestrator | 2026-02-05 00:55:00 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:00.082815 | orchestrator | 2026-02-05 00:55:00 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:00.082863 | orchestrator | 2026-02-05 00:55:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:03.133239 | orchestrator | 2026-02-05 00:55:03 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:03.134106 | orchestrator | 2026-02-05 00:55:03 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:03.135528 | orchestrator | 
2026-02-05 00:55:03 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:03.135574 | orchestrator | 2026-02-05 00:55:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:06.185860 | orchestrator | 2026-02-05 00:55:06 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:06.187715 | orchestrator | 2026-02-05 00:55:06 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:06.189631 | orchestrator | 2026-02-05 00:55:06 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:06.189731 | orchestrator | 2026-02-05 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:09.235975 | orchestrator | 2026-02-05 00:55:09 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:09.238278 | orchestrator | 2026-02-05 00:55:09 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:09.240854 | orchestrator | 2026-02-05 00:55:09 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:09.240988 | orchestrator | 2026-02-05 00:55:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:12.291620 | orchestrator | 2026-02-05 00:55:12 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:12.293490 | orchestrator | 2026-02-05 00:55:12 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:12.295855 | orchestrator | 2026-02-05 00:55:12 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:12.295900 | orchestrator | 2026-02-05 00:55:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:15.335121 | orchestrator | 2026-02-05 00:55:15 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:15.336788 | orchestrator | 2026-02-05 00:55:15 | INFO  | Task 
65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:15.338142 | orchestrator | 2026-02-05 00:55:15 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:15.338177 | orchestrator | 2026-02-05 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:18.374882 | orchestrator | 2026-02-05 00:55:18 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:18.374969 | orchestrator | 2026-02-05 00:55:18 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:18.376247 | orchestrator | 2026-02-05 00:55:18 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:18.376292 | orchestrator | 2026-02-05 00:55:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:21.416513 | orchestrator | 2026-02-05 00:55:21 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:21.419005 | orchestrator | 2026-02-05 00:55:21 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:21.421471 | orchestrator | 2026-02-05 00:55:21 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:21.421587 | orchestrator | 2026-02-05 00:55:21 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:24.462163 | orchestrator | 2026-02-05 00:55:24 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:24.462791 | orchestrator | 2026-02-05 00:55:24 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:24.465969 | orchestrator | 2026-02-05 00:55:24 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:24.467101 | orchestrator | 2026-02-05 00:55:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:27.516095 | orchestrator | 2026-02-05 00:55:27 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state 
STARTED 2026-02-05 00:55:27.517881 | orchestrator | 2026-02-05 00:55:27 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:27.520119 | orchestrator | 2026-02-05 00:55:27 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state STARTED 2026-02-05 00:55:27.520166 | orchestrator | 2026-02-05 00:55:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:30.568364 | orchestrator | 2026-02-05 00:55:30 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:30.570304 | orchestrator | 2026-02-05 00:55:30 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:30.571408 | orchestrator | 2026-02-05 00:55:30 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:30.576581 | orchestrator | 2026-02-05 00:55:30 | INFO  | Task 0f43467e-ca75-4338-8688-10edc6f003da is in state SUCCESS 2026-02-05 00:55:30.578574 | orchestrator | 2026-02-05 00:55:30.578642 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 00:55:30.578694 | orchestrator | 2.16.14 2026-02-05 00:55:30.578769 | orchestrator | 2026-02-05 00:55:30.578777 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-05 00:55:30.578784 | orchestrator | 2026-02-05 00:55:30.578790 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 00:55:30.578823 | orchestrator | Thursday 05 February 2026 00:44:46 +0000 (0:00:01.059) 0:00:01.059 ***** 2026-02-05 00:55:30.578832 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.578839 | orchestrator | 2026-02-05 00:55:30.578846 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 00:55:30.578853 | 
orchestrator | Thursday 05 February 2026 00:44:47 +0000 (0:00:01.124) 0:00:02.184 ***** 2026-02-05 00:55:30.578860 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.578866 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.578872 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.578878 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.578884 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.578890 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.578896 | orchestrator | 2026-02-05 00:55:30.578903 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 00:55:30.578909 | orchestrator | Thursday 05 February 2026 00:44:48 +0000 (0:00:01.522) 0:00:03.706 ***** 2026-02-05 00:55:30.578945 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.578953 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.578978 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579021 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579068 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579092 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579133 | orchestrator | 2026-02-05 00:55:30.579142 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 00:55:30.579149 | orchestrator | Thursday 05 February 2026 00:44:49 +0000 (0:00:00.712) 0:00:04.419 ***** 2026-02-05 00:55:30.579169 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.579195 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.579202 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579208 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579213 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579219 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579224 | orchestrator | 2026-02-05 00:55:30.579230 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-02-05 00:55:30.579235 | orchestrator | Thursday 05 February 2026 00:44:50 +0000 (0:00:00.882) 0:00:05.301 ***** 2026-02-05 00:55:30.579241 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.579247 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.579252 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579258 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579287 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579294 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579299 | orchestrator | 2026-02-05 00:55:30.579305 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 00:55:30.579311 | orchestrator | Thursday 05 February 2026 00:44:51 +0000 (0:00:00.985) 0:00:06.287 ***** 2026-02-05 00:55:30.579317 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.579323 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.579329 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579335 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579341 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579365 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579373 | orchestrator | 2026-02-05 00:55:30.579379 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 00:55:30.579399 | orchestrator | Thursday 05 February 2026 00:44:52 +0000 (0:00:00.740) 0:00:07.027 ***** 2026-02-05 00:55:30.579405 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.579411 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.579445 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579452 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579484 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579491 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579496 | orchestrator | 2026-02-05 00:55:30.579502 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] *** 2026-02-05 00:55:30.579509 | orchestrator | Thursday 05 February 2026 00:44:52 +0000 (0:00:00.857) 0:00:07.885 ***** 2026-02-05 00:55:30.579514 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.579521 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.579527 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.579534 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.579541 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.579548 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.579554 | orchestrator | 2026-02-05 00:55:30.579560 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 00:55:30.579566 | orchestrator | Thursday 05 February 2026 00:44:53 +0000 (0:00:00.712) 0:00:08.597 ***** 2026-02-05 00:55:30.579573 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.579579 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.579585 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579591 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579598 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579605 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579611 | orchestrator | 2026-02-05 00:55:30.579640 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 00:55:30.579815 | orchestrator | Thursday 05 February 2026 00:44:54 +0000 (0:00:01.178) 0:00:09.776 ***** 2026-02-05 00:55:30.579835 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:55:30.579840 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:55:30.579844 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:55:30.579848 | orchestrator | 2026-02-05 
00:55:30.579852 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 00:55:30.579856 | orchestrator | Thursday 05 February 2026 00:44:55 +0000 (0:00:00.992) 0:00:10.769 ***** 2026-02-05 00:55:30.579860 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.579864 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.579868 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.579885 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.579890 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.579893 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.579897 | orchestrator | 2026-02-05 00:55:30.579901 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 00:55:30.579905 | orchestrator | Thursday 05 February 2026 00:44:57 +0000 (0:00:01.530) 0:00:12.299 ***** 2026-02-05 00:55:30.579918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:55:30.579922 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:55:30.579926 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:55:30.579930 | orchestrator | 2026-02-05 00:55:30.579933 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 00:55:30.579937 | orchestrator | Thursday 05 February 2026 00:44:59 +0000 (0:00:02.344) 0:00:14.644 ***** 2026-02-05 00:55:30.579941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 00:55:30.579946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 00:55:30.579949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 00:55:30.579953 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.579957 | orchestrator | 2026-02-05 00:55:30.579961 | 
orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 00:55:30.579971 | orchestrator | Thursday 05 February 2026 00:45:00 +0000 (0:00:00.659) 0:00:15.304 ***** 2026-02-05 00:55:30.579977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.579984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.579988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.579992 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.579995 | orchestrator | 2026-02-05 00:55:30.579999 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 00:55:30.580021 | orchestrator | Thursday 05 February 2026 00:45:01 +0000 (0:00:00.810) 0:00:16.114 ***** 2026-02-05 00:55:30.580027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.580033 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.580037 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.580041 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580045 | orchestrator | 2026-02-05 00:55:30.580049 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 00:55:30.580053 | orchestrator | Thursday 05 February 2026 00:45:01 +0000 (0:00:00.428) 0:00:16.543 ***** 2026-02-05 00:55:30.580062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 00:44:57.896569', 'end': '2026-02-05 00:44:58.009635', 'delta': '0:00:00.113066', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.580073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 00:44:58.727499', 'end': '2026-02-05 00:44:58.832233', 'delta': '0:00:00.104734', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.580081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 00:44:59.350703', 'end': '2026-02-05 00:44:59.466709', 'delta': '0:00:00.116006', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.580204 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580210 | orchestrator | 2026-02-05 00:55:30.580213 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 00:55:30.580217 | orchestrator | Thursday 05 February 2026 00:45:02 +0000 (0:00:00.679) 0:00:17.222 ***** 2026-02-05 00:55:30.580221 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.580225 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.580229 | orchestrator | ok: [testbed-node-4] 
2026-02-05 00:55:30.580232 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.580236 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.580240 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.580244 | orchestrator | 2026-02-05 00:55:30.580248 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 00:55:30.580252 | orchestrator | Thursday 05 February 2026 00:45:03 +0000 (0:00:01.473) 0:00:18.695 ***** 2026-02-05 00:55:30.580256 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.580259 | orchestrator | 2026-02-05 00:55:30.580318 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 00:55:30.580323 | orchestrator | Thursday 05 February 2026 00:45:04 +0000 (0:00:00.660) 0:00:19.356 ***** 2026-02-05 00:55:30.580327 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580330 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580334 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580338 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580342 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580346 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580349 | orchestrator | 2026-02-05 00:55:30.580353 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 00:55:30.580361 | orchestrator | Thursday 05 February 2026 00:45:05 +0000 (0:00:01.372) 0:00:20.728 ***** 2026-02-05 00:55:30.580364 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580368 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580372 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580376 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580380 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580383 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 00:55:30.580387 | orchestrator | 2026-02-05 00:55:30.580391 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 00:55:30.580395 | orchestrator | Thursday 05 February 2026 00:45:08 +0000 (0:00:02.997) 0:00:23.725 ***** 2026-02-05 00:55:30.580399 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580402 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580406 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580410 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580414 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580417 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580421 | orchestrator | 2026-02-05 00:55:30.580425 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 00:55:30.580429 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:01.344) 0:00:25.070 ***** 2026-02-05 00:55:30.580433 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580437 | orchestrator | 2026-02-05 00:55:30.580441 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 00:55:30.580445 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:00.351) 0:00:25.421 ***** 2026-02-05 00:55:30.580448 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580452 | orchestrator | 2026-02-05 00:55:30.580456 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 00:55:30.580460 | orchestrator | Thursday 05 February 2026 00:45:10 +0000 (0:00:00.223) 0:00:25.645 ***** 2026-02-05 00:55:30.580464 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580467 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580471 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580479 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 00:55:30.580483 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580486 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580490 | orchestrator | 2026-02-05 00:55:30.580494 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 00:55:30.580498 | orchestrator | Thursday 05 February 2026 00:45:11 +0000 (0:00:00.605) 0:00:26.250 ***** 2026-02-05 00:55:30.580502 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580505 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580556 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580561 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580565 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580568 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580572 | orchestrator | 2026-02-05 00:55:30.580576 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 00:55:30.580580 | orchestrator | Thursday 05 February 2026 00:45:11 +0000 (0:00:00.757) 0:00:27.008 ***** 2026-02-05 00:55:30.580583 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580587 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580591 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580595 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580598 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580602 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580606 | orchestrator | 2026-02-05 00:55:30.580610 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 00:55:30.580614 | orchestrator | Thursday 05 February 2026 00:45:12 +0000 (0:00:00.552) 0:00:27.561 ***** 2026-02-05 00:55:30.580621 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580625 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 00:55:30.580633 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580637 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580641 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580663 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580669 | orchestrator | 2026-02-05 00:55:30.580675 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 00:55:30.580681 | orchestrator | Thursday 05 February 2026 00:45:13 +0000 (0:00:00.944) 0:00:28.505 ***** 2026-02-05 00:55:30.580686 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580692 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580697 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580703 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580709 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580716 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580721 | orchestrator | 2026-02-05 00:55:30.580725 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 00:55:30.580729 | orchestrator | Thursday 05 February 2026 00:45:14 +0000 (0:00:00.973) 0:00:29.479 ***** 2026-02-05 00:55:30.580733 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580736 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580755 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580759 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580763 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580766 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580770 | orchestrator | 2026-02-05 00:55:30.580774 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 00:55:30.580778 | orchestrator | Thursday 05 February 2026 00:45:15 +0000 (0:00:00.909) 
0:00:30.388 ***** 2026-02-05 00:55:30.580781 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.580796 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.580801 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.580815 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.580819 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.580823 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.580827 | orchestrator | 2026-02-05 00:55:30.580831 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 00:55:30.580834 | orchestrator | Thursday 05 February 2026 00:45:15 +0000 (0:00:00.550) 0:00:30.939 ***** 2026-02-05 00:55:30.580863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060', 'dm-uuid-LVM-feYzPNgm7J2XpMW7Ydk9y2b5fFw5ZIRRIiotHRlVte350u57D33HOu7VVPdb83XH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537', 'dm-uuid-LVM-uFzvBTKpmUAt8VIYysGz41q3AIABZs8JooEhyqZtqHh2f1cnjHkA9h5UPVxA9fNA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe', 'dm-uuid-LVM-QpOrriM4HXirfF1rs1OzVygGinYcii5FYBFOD50VyFUpoK8Z5nC1vL3lA4GQepGI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5', 'dm-uuid-LVM-gthMeB4bH1NEmx1lNJOfN6HdjDxRPaoOR3G4GdZjZCGPLbrkG1n1uydrTmelXG1F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.580975 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CXZdc2-YWxI-0CJn-7isE-dwsd-qDH3-XuWeVU', 'scsi-0QEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0', 'scsi-SQEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.580991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.580995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LrQnMQ-DAfw-CZQO-oCUP-6BZ2-It7W-7UQ90E', 'scsi-0QEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9', 'scsi-SQEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d', 'scsi-SQEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c', 'dm-uuid-LVM-MUghBa6PcrCydaFvfG0TUOZ9glQ5zyP1N3lbXM9MZ3ncFyWh0RzPsE3Ya86hIsTB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd', 'dm-uuid-LVM-KAxqEhc8qSlu2zzfQu7TSpQJv2qMiOvrq391tuAhjZUKzI0s1g6oimAMe8Junomx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581040 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLHj76-cRd3-Fq33-c8yU-BNkP-1i3U-MwSVlt', 'scsi-0QEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d', 'scsi-SQEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3DBsr-6x7z-Ycn5-m8Xw-A3yt-yMAu-IATD8q', 'scsi-0QEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf', 'scsi-SQEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30', 'scsi-SQEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581852 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.581894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581948 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5YL8UG-yANo-1ams-B1Eb-5Hxa-zRuW-Qi1SZF', 'scsi-0QEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb', 'scsi-SQEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Su7pbR-s8e6-pX8N-XCbj-4Jul-MEPJ-wrAfd4', 'scsi-0QEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3', 'scsi-SQEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.581980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.581984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722', 'scsi-SQEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-05 00:55:30.582061 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.582065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part1', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part14', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part15', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part16', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-05 00:55:30.582203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582211 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.582253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582260 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.582264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582273 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.582286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:55:30.582419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:55:30.582460 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.582464 | orchestrator | 2026-02-05 00:55:30.582468 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 00:55:30.582472 | orchestrator | Thursday 05 February 2026 00:45:17 +0000 (0:00:01.604) 0:00:32.544 ***** 2026-02-05 00:55:30.582480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060', 'dm-uuid-LVM-feYzPNgm7J2XpMW7Ydk9y2b5fFw5ZIRRIiotHRlVte350u57D33HOu7VVPdb83XH'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.582486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537', 'dm-uuid-LVM-uFzvBTKpmUAt8VIYysGz41q3AIABZs8JooEhyqZtqHh2f1cnjHkA9h5UPVxA9fNA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.582493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.582499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe', 'dm-uuid-LVM-QpOrriM4HXirfF1rs1OzVygGinYcii5FYBFOD50VyFUpoK8Z5nC1vL3lA4GQepGI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582623 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582676 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5', 'dm-uuid-LVM-gthMeB4bH1NEmx1lNJOfN6HdjDxRPaoOR3G4GdZjZCGPLbrkG1n1uydrTmelXG1F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CXZdc2-YWxI-0CJn-7isE-dwsd-qDH3-XuWeVU', 'scsi-0QEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0', 'scsi-SQEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582742 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582750 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LrQnMQ-DAfw-CZQO-oCUP-6BZ2-It7W-7UQ90E', 'scsi-0QEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9', 'scsi-SQEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d', 'scsi-SQEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c', 'dm-uuid-LVM-MUghBa6PcrCydaFvfG0TUOZ9glQ5zyP1N3lbXM9MZ3ncFyWh0RzPsE3Ya86hIsTB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582815 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd', 'dm-uuid-LVM-KAxqEhc8qSlu2zzfQu7TSpQJv2qMiOvrq391tuAhjZUKzI0s1g6oimAMe8Junomx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582845 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582974 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLHj76-cRd3-Fq33-c8yU-BNkP-1i3U-MwSVlt', 'scsi-0QEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d', 'scsi-SQEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.582982 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.582986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3DBsr-6x7z-Ycn5-m8Xw-A3yt-yMAu-IATD8q', 'scsi-0QEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf', 'scsi-SQEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583039 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30', 'scsi-SQEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583075 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5YL8UG-yANo-1ams-B1Eb-5Hxa-zRuW-Qi1SZF', 'scsi-0QEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb', 'scsi-SQEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583102 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583106 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Su7pbR-s8e6-pX8N-XCbj-4Jul-MEPJ-wrAfd4', 'scsi-0QEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3', 'scsi-SQEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583147 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583159 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722', 'scsi-SQEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583168 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583172 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583176 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:55:30.583212 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part1', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part14', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part15', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part16', 'scsi-SQEMU_QEMU_HARDDISK_d26fcf07-a835-4e26-a700-2c8fd3601c19-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 00:55:30.583234 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.583269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583278 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583287 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583291 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583295 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583299 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.583303 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583307 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.583311 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583343 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583365 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583369 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae8576b0-3518-4bda-8316-c370e1678e8f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 00:55:30.583404 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583418 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583422 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583426 | orchestrator | 
skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583430 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583442 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.583472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583477 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583484 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e93b330-8072-4b50-a022-e1b5f3f4b47f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 00:55:30.583494 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:55:30.583503 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.583507 | orchestrator | 2026-02-05 00:55:30.583537 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 00:55:30.583543 | orchestrator | Thursday 05 February 2026 00:45:18 +0000 (0:00:00.857) 0:00:33.401 ***** 2026-02-05 00:55:30.583547 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.583552 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.583556 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.583559 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.583563 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.583567 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.583571 | orchestrator | 2026-02-05 00:55:30.583575 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 00:55:30.583579 | orchestrator | Thursday 05 February 2026 00:45:19 +0000 (0:00:01.015) 0:00:34.417 ***** 2026-02-05 00:55:30.583583 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.583587 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.583590 | 
orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.583594 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.583598 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.583602 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.583606 | orchestrator | 2026-02-05 00:55:30.583610 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 00:55:30.583614 | orchestrator | Thursday 05 February 2026 00:45:20 +0000 (0:00:00.710) 0:00:35.127 ***** 2026-02-05 00:55:30.583618 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.583628 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.583632 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.583636 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.583639 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.583691 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.583696 | orchestrator | 2026-02-05 00:55:30.583700 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 00:55:30.583704 | orchestrator | Thursday 05 February 2026 00:45:20 +0000 (0:00:00.838) 0:00:35.966 ***** 2026-02-05 00:55:30.583707 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.583711 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.583720 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.583724 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.583728 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.583732 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.583735 | orchestrator | 2026-02-05 00:55:30.583739 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 00:55:30.583743 | orchestrator | Thursday 05 February 2026 00:45:21 +0000 (0:00:00.700) 0:00:36.668 ***** 2026-02-05 00:55:30.583747 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 00:55:30.583751 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.583755 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.583758 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.583762 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.583766 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.583770 | orchestrator | 2026-02-05 00:55:30.583773 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 00:55:30.583777 | orchestrator | Thursday 05 February 2026 00:45:22 +0000 (0:00:00.941) 0:00:37.609 ***** 2026-02-05 00:55:30.583781 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.583790 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.583798 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.583802 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.583805 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.583809 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.583813 | orchestrator | 2026-02-05 00:55:30.583817 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 00:55:30.583821 | orchestrator | Thursday 05 February 2026 00:45:23 +0000 (0:00:00.556) 0:00:38.165 ***** 2026-02-05 00:55:30.583824 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-05 00:55:30.583828 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-05 00:55:30.583832 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-05 00:55:30.583836 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-05 00:55:30.583840 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 00:55:30.583843 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-05 00:55:30.583847 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 
2026-02-05 00:55:30.583851 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 00:55:30.583854 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-05 00:55:30.583858 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:55:30.583862 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 00:55:30.583866 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 00:55:30.583869 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 00:55:30.583873 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:55:30.583877 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-05 00:55:30.583880 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 00:55:30.583884 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 00:55:30.583888 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 00:55:30.583891 | orchestrator |
2026-02-05 00:55:30.583895 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 00:55:30.583899 | orchestrator | Thursday 05 February 2026 00:45:26 +0000 (0:00:02.948) 0:00:41.113 *****
2026-02-05 00:55:30.583903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 00:55:30.583907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 00:55:30.583911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 00:55:30.583914 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.583918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 00:55:30.583922 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 00:55:30.583925 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 00:55:30.583929 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.583933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 00:55:30.583952 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 00:55:30.583956 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 00:55:30.583960 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.583964 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:55:30.583968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:55:30.583972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:55:30.583976 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.583979 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-05 00:55:30.583983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 00:55:30.583987 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-05 00:55:30.583991 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.583995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 00:55:30.584006 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 00:55:30.584010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 00:55:30.584014 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584018 | orchestrator |
2026-02-05 00:55:30.584022 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 00:55:30.584026 | orchestrator | Thursday 05 February 2026 00:45:27 +0000 (0:00:01.226) 0:00:42.339 *****
2026-02-05 00:55:30.584032 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584036 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584040 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584044 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:55:30.584048 | orchestrator |
2026-02-05 00:55:30.584052 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 00:55:30.584056 | orchestrator | Thursday 05 February 2026 00:45:28 +0000 (0:00:01.128) 0:00:43.468 *****
2026-02-05 00:55:30.584060 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584064 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584067 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584071 | orchestrator |
2026-02-05 00:55:30.584075 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 00:55:30.584078 | orchestrator | Thursday 05 February 2026 00:45:28 +0000 (0:00:00.427) 0:00:43.896 *****
2026-02-05 00:55:30.584082 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584086 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584089 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584093 | orchestrator |
2026-02-05 00:55:30.584097 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 00:55:30.584100 | orchestrator | Thursday 05 February 2026 00:45:29 +0000 (0:00:00.444) 0:00:44.340 *****
2026-02-05 00:55:30.584104 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584108 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584112 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584115 | orchestrator |
2026-02-05 00:55:30.584119 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 00:55:30.584123 | orchestrator | Thursday 05 February 2026 00:45:29 +0000 (0:00:00.583) 0:00:44.924 *****
2026-02-05 00:55:30.584126 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584130 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584134 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584137 | orchestrator |
2026-02-05 00:55:30.584141 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 00:55:30.584145 | orchestrator | Thursday 05 February 2026 00:45:30 +0000 (0:00:00.516) 0:00:45.441 *****
2026-02-05 00:55:30.584148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.584152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.584156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.584160 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584163 | orchestrator |
2026-02-05 00:55:30.584167 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 00:55:30.584171 | orchestrator | Thursday 05 February 2026 00:45:30 +0000 (0:00:00.528) 0:00:45.969 *****
2026-02-05 00:55:30.584174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.584178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.584183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.584187 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584192 | orchestrator |
2026-02-05 00:55:30.584196 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 00:55:30.584204 | orchestrator | Thursday 05 February 2026 00:45:31 +0000 (0:00:00.429) 0:00:46.399 *****
2026-02-05 00:55:30.584209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.584213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.584218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.584222 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584226 | orchestrator |
2026-02-05 00:55:30.584231 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 00:55:30.584235 | orchestrator | Thursday 05 February 2026 00:45:32 +0000 (0:00:00.844) 0:00:47.243 *****
2026-02-05 00:55:30.584239 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584244 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584248 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584253 | orchestrator |
2026-02-05 00:55:30.584257 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 00:55:30.584261 | orchestrator | Thursday 05 February 2026 00:45:32 +0000 (0:00:00.487) 0:00:47.730 *****
2026-02-05 00:55:30.584266 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-05 00:55:30.584270 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-05 00:55:30.584287 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-05 00:55:30.584292 | orchestrator |
2026-02-05 00:55:30.584296 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-05 00:55:30.584301 | orchestrator | Thursday 05 February 2026 00:45:33 +0000 (0:00:00.924) 0:00:48.655 *****
2026-02-05 00:55:30.584305 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:55:30.584309 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:55:30.584313 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:55:30.584318 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.584323 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 00:55:30.584327 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 00:55:30.584331 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 00:55:30.584335 | orchestrator |
2026-02-05 00:55:30.584340 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-05 00:55:30.584344 | orchestrator | Thursday 05 February 2026 00:45:34 +0000 (0:00:01.271) 0:00:49.927 *****
2026-02-05 00:55:30.584351 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:55:30.584356 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:55:30.584360 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:55:30.584364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.584369 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 00:55:30.584374 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 00:55:30.584378 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 00:55:30.584382 | orchestrator |
2026-02-05 00:55:30.584387 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 00:55:30.584391 | orchestrator | Thursday 05 February 2026 00:45:36 +0000 (0:00:01.905) 0:00:51.832 *****
2026-02-05 00:55:30.584396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.584401 | orchestrator |
2026-02-05 00:55:30.584406 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 00:55:30.584410 | orchestrator | Thursday 05 February 2026 00:45:38 +0000 (0:00:01.890) 0:00:53.723 *****
2026-02-05 00:55:30.584418 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.584423 | orchestrator |
2026-02-05 00:55:30.584427 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 00:55:30.584432 | orchestrator | Thursday 05 February 2026 00:45:40 +0000 (0:00:01.454) 0:00:55.177 *****
2026-02-05 00:55:30.584436 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584441 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584445 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584450 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.584454 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.584458 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.584463 | orchestrator |
2026-02-05 00:55:30.584467 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 00:55:30.584471 | orchestrator | Thursday 05 February 2026 00:45:42 +0000 (0:00:01.842) 0:00:57.020 *****
2026-02-05 00:55:30.584476 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584480 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584485 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584489 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584493 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584498 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584502 | orchestrator |
2026-02-05 00:55:30.584507 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 00:55:30.584511 | orchestrator | Thursday 05 February 2026 00:45:42 +0000 (0:00:00.989) 0:00:58.009 *****
2026-02-05 00:55:30.584516 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584520 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584524 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584529 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584533 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584538 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584542 | orchestrator |
2026-02-05 00:55:30.584546 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 00:55:30.584551 | orchestrator | Thursday 05 February 2026 00:45:43 +0000 (0:00:00.976) 0:00:58.986 *****
2026-02-05 00:55:30.584555 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584560 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584565 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584569 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584574 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584577 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584581 | orchestrator |
2026-02-05 00:55:30.584585 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 00:55:30.584588 | orchestrator | Thursday 05 February 2026 00:45:44 +0000 (0:00:00.911) 0:00:59.897 *****
2026-02-05 00:55:30.584592 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584596 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584600 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584603 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.584607 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.584623 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.584628 | orchestrator |
2026-02-05 00:55:30.584632 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 00:55:30.584635 | orchestrator | Thursday 05 February 2026 00:45:46 +0000 (0:00:01.417) 0:01:01.315 *****
2026-02-05 00:55:30.584639 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584643 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584666 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584670 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584674 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584678 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584687 | orchestrator |
2026-02-05 00:55:30.584691 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 00:55:30.584695 | orchestrator | Thursday 05 February 2026 00:45:47 +0000 (0:00:01.569) 0:01:02.884 *****
2026-02-05 00:55:30.584699 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584703 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584707 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584710 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584714 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584718 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584722 | orchestrator |
2026-02-05 00:55:30.584725 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 00:55:30.584729 | orchestrator | Thursday 05 February 2026 00:45:48 +0000 (0:00:00.680) 0:01:03.565 *****
2026-02-05 00:55:30.584733 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584737 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584743 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584747 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.584751 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.584755 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.584759 | orchestrator |
2026-02-05 00:55:30.584762 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 00:55:30.584766 | orchestrator | Thursday 05 February 2026 00:45:50 +0000 (0:00:01.753) 0:01:05.318 *****
2026-02-05 00:55:30.584770 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584774 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584778 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584781 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.584785 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.584789 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.584793 | orchestrator |
2026-02-05 00:55:30.584796 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 00:55:30.584800 | orchestrator | Thursday 05 February 2026 00:45:52 +0000 (0:00:01.679) 0:01:07.064 *****
2026-02-05 00:55:30.584804 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584808 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584812 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584815 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584819 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584823 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584827 | orchestrator |
2026-02-05 00:55:30.584831 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 00:55:30.584835 | orchestrator | Thursday 05 February 2026 00:45:53 +0000 (0:00:01.679) 0:01:08.743 *****
2026-02-05 00:55:30.584838 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584842 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.584846 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584850 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.584853 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584857 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.584861 | orchestrator |
2026-02-05 00:55:30.584865 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 00:55:30.584869 | orchestrator | Thursday 05 February 2026 00:45:54 +0000 (0:00:01.013) 0:01:09.756 *****
2026-02-05 00:55:30.584873 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584877 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584881 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584885 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584888 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584892 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584896 | orchestrator |
2026-02-05 00:55:30.584900 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 00:55:30.584904 | orchestrator | Thursday 05 February 2026 00:45:55 +0000 (0:00:00.907) 0:01:10.664 *****
2026-02-05 00:55:30.584907 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584914 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584918 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584922 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584926 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584930 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584934 | orchestrator |
2026-02-05 00:55:30.584937 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 00:55:30.584941 | orchestrator | Thursday 05 February 2026 00:45:56 +0000 (0:00:00.657) 0:01:11.321 *****
2026-02-05 00:55:30.584945 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.584949 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.584952 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.584956 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584960 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584964 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.584968 | orchestrator |
2026-02-05 00:55:30.584971 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 00:55:30.584975 | orchestrator | Thursday 05 February 2026 00:45:57 +0000 (0:00:00.724) 0:01:12.046 *****
2026-02-05 00:55:30.584979 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.584983 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.584987 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.584990 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.584994 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.584998 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585002 | orchestrator |
2026-02-05 00:55:30.585005 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 00:55:30.585009 | orchestrator | Thursday 05 February 2026 00:45:58 +0000 (0:00:01.018) 0:01:13.065 *****
2026-02-05 00:55:30.585013 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585017 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585021 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585024 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585042 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585046 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585050 | orchestrator |
2026-02-05 00:55:30.585054 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 00:55:30.585058 | orchestrator | Thursday 05 February 2026 00:45:59 +0000 (0:00:01.125) 0:01:14.191 *****
2026-02-05 00:55:30.585062 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585066 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585070 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585074 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.585078 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.585082 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.585086 | orchestrator |
2026-02-05 00:55:30.585090 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 00:55:30.585094 | orchestrator | Thursday 05 February 2026 00:46:00 +0000 (0:00:00.860) 0:01:15.051 *****
2026-02-05 00:55:30.585098 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.585101 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.585105 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.585109 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.585113 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.585117 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.585121 | orchestrator |
2026-02-05 00:55:30.585125 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 00:55:30.585129 | orchestrator | Thursday 05 February 2026 00:46:01 +0000 (0:00:01.026) 0:01:16.078 *****
2026-02-05 00:55:30.585133 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.585137 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.585144 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.585148 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.585152 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.585159 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.585163 | orchestrator |
2026-02-05 00:55:30.585167 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 00:55:30.585171 | orchestrator | Thursday 05 February 2026 00:46:02 +0000 (0:00:01.837) 0:01:17.915 *****
2026-02-05 00:55:30.585175 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.585179 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.585183 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.585187 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.585191 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.585195 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.585199 | orchestrator |
2026-02-05 00:55:30.585203 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 00:55:30.585207 | orchestrator | Thursday 05 February 2026 00:46:04 +0000 (0:00:01.928) 0:01:19.844 *****
2026-02-05 00:55:30.585211 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.585215 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.585219 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.585223 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.585227 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.585231 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.585234 | orchestrator |
2026-02-05 00:55:30.585239 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 00:55:30.585243 | orchestrator | Thursday 05 February 2026 00:46:07 +0000 (0:00:02.520) 0:01:22.364 *****
2026-02-05 00:55:30.585247 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.585251 | orchestrator |
2026-02-05 00:55:30.585255 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 00:55:30.585259 | orchestrator | Thursday 05 February 2026 00:46:08 +0000 (0:00:01.078) 0:01:23.442 *****
2026-02-05 00:55:30.585263 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585267 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585271 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585274 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585278 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585282 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585286 | orchestrator |
2026-02-05 00:55:30.585290 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 00:55:30.585294 | orchestrator | Thursday 05 February 2026 00:46:08 +0000 (0:00:00.549) 0:01:23.992 *****
2026-02-05 00:55:30.585298 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585302 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585306 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585310 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585314 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585318 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585322 | orchestrator |
2026-02-05 00:55:30.585326 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 00:55:30.585330 | orchestrator | Thursday 05 February 2026 00:46:09 +0000 (0:00:00.646) 0:01:24.639 *****
2026-02-05 00:55:30.585334 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:55:30.585338 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:55:30.585342 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:55:30.585346 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:55:30.585350 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:55:30.585353 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:55:30.585357 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:55:30.585365 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:55:30.585369 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:55:30.585373 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:55:30.585389 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:55:30.585393 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:55:30.585397 | orchestrator |
2026-02-05 00:55:30.585401 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 00:55:30.585405 | orchestrator | Thursday 05 February 2026 00:46:10 +0000 (0:00:01.266) 0:01:25.905 *****
2026-02-05 00:55:30.585409 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.585413 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.585417 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.585421 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.585425 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.585429 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.585433 | orchestrator |
2026-02-05 00:55:30.585437 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 00:55:30.585441 | orchestrator | Thursday 05 February 2026 00:46:11 +0000 (0:00:01.052) 0:01:26.958 *****
2026-02-05 00:55:30.585445 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585449 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585453 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585457 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585461 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585465 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585469 | orchestrator |
2026-02-05 00:55:30.585476 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 00:55:30.585480 | orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:00.545) 0:01:27.504 *****
2026-02-05 00:55:30.585484 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585488 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585492 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585495 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585499 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585503 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585507 | orchestrator |
2026-02-05 00:55:30.585511 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 00:55:30.585515 | orchestrator | Thursday 05 February 2026 00:46:13 +0000 (0:00:00.792) 0:01:28.296 *****
2026-02-05 00:55:30.585519 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585523 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585527 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585531 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585535 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585539 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585543 | orchestrator |
2026-02-05 00:55:30.585547 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 00:55:30.585551 | orchestrator | Thursday 05 February 2026 00:46:13 +0000 (0:00:00.593) 0:01:28.889 *****
2026-02-05 00:55:30.585555 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.585559 | orchestrator |
2026-02-05 00:55:30.585563 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 00:55:30.585567 | orchestrator | Thursday 05 February 2026 00:46:15 +0000 (0:00:01.122) 0:01:30.012 *****
2026-02-05 00:55:30.585571 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.585575 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.585583 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.585586 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.585590 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.585594 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.585598 | orchestrator |
2026-02-05 00:55:30.585602 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 00:55:30.585606 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:54.428) 0:02:24.440 *****
2026-02-05 00:55:30.585610 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:55:30.585614 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:55:30.585618 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:55:30.585622 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585626 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:55:30.585630 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:55:30.585634 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:55:30.585638 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:55:30.585642 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:55:30.585658 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:55:30.585662 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585666 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:55:30.585670 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:55:30.585674 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:55:30.585678 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585682 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585686 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:55:30.585690 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:55:30.585694 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:55:30.585699 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585718 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:55:30.585722 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:55:30.585726 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:55:30.585730 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585734 | orchestrator |
2026-02-05 00:55:30.585738 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 00:55:30.585742 | orchestrator | Thursday 05 February 2026 00:47:10 +0000 (0:00:00.849) 0:02:25.290 *****
2026-02-05 00:55:30.585746 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585750 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585754 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585758 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585762 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585766 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585770 | orchestrator |
2026-02-05 00:55:30.585774 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 00:55:30.585778 | orchestrator | Thursday 05 February 2026 00:47:10 +0000 (0:00:00.485) 0:02:25.776 *****
2026-02-05 00:55:30.585782 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585786 | orchestrator |
2026-02-05 00:55:30.585790 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 00:55:30.585794 | orchestrator | Thursday 05 February 2026 00:47:11 +0000 (0:00:00.248) 0:02:26.025 *****
2026-02-05 00:55:30.585805 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585812 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585816 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585820 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585823 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.585827 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.585831 | orchestrator |
2026-02-05 00:55:30.585835 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 00:55:30.585838 | orchestrator | Thursday 05 February 2026 00:47:11 +0000 (0:00:00.594) 0:02:26.619 *****
2026-02-05 00:55:30.585842 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.585846 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.585849 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.585853 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.585857 | orchestrator | skipping: [testbed-node-1]
2026-02-05
00:55:30.585860 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.585864 | orchestrator | 2026-02-05 00:55:30.585868 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 00:55:30.585872 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:00.516) 0:02:27.135 ***** 2026-02-05 00:55:30.585875 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.585879 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.585883 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.585886 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.585890 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.585894 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.585897 | orchestrator | 2026-02-05 00:55:30.585901 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 00:55:30.585905 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:00.750) 0:02:27.886 ***** 2026-02-05 00:55:30.585909 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.585912 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.585916 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.585920 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.585924 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.585927 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.585931 | orchestrator | 2026-02-05 00:55:30.585935 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 00:55:30.585938 | orchestrator | Thursday 05 February 2026 00:47:14 +0000 (0:00:02.048) 0:02:29.935 ***** 2026-02-05 00:55:30.585942 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.585946 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.585949 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.585953 | orchestrator | ok: [testbed-node-0] 
2026-02-05 00:55:30.585957 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.585960 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.585964 | orchestrator | 2026-02-05 00:55:30.585968 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 00:55:30.585972 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:00.882) 0:02:30.817 ***** 2026-02-05 00:55:30.585976 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.585981 | orchestrator | 2026-02-05 00:55:30.585984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 00:55:30.585988 | orchestrator | Thursday 05 February 2026 00:47:16 +0000 (0:00:01.052) 0:02:31.870 ***** 2026-02-05 00:55:30.585992 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.585995 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.585999 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586003 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586006 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586010 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586054 | orchestrator | 2026-02-05 00:55:30.586059 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 00:55:30.586063 | orchestrator | Thursday 05 February 2026 00:47:17 +0000 (0:00:00.488) 0:02:32.359 ***** 2026-02-05 00:55:30.586066 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586070 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586074 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586078 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586081 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
00:55:30.586085 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586089 | orchestrator | 2026-02-05 00:55:30.586093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 00:55:30.586096 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:00.808) 0:02:33.167 ***** 2026-02-05 00:55:30.586100 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586104 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586122 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586126 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586130 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586134 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586137 | orchestrator | 2026-02-05 00:55:30.586141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 00:55:30.586145 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:00.613) 0:02:33.781 ***** 2026-02-05 00:55:30.586149 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586152 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586156 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586160 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586164 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586167 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586171 | orchestrator | 2026-02-05 00:55:30.586175 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-05 00:55:30.586179 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:00.913) 0:02:34.695 ***** 2026-02-05 00:55:30.586182 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586186 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586189 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
00:55:30.586193 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586197 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586201 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586204 | orchestrator | 2026-02-05 00:55:30.586208 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 00:55:30.586215 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:00.653) 0:02:35.349 ***** 2026-02-05 00:55:30.586218 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586222 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586226 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586230 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586233 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586237 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586241 | orchestrator | 2026-02-05 00:55:30.586245 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 00:55:30.586249 | orchestrator | Thursday 05 February 2026 00:47:21 +0000 (0:00:00.674) 0:02:36.024 ***** 2026-02-05 00:55:30.586252 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586256 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586260 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586263 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586267 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586271 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586275 | orchestrator | 2026-02-05 00:55:30.586278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 00:55:30.586282 | orchestrator | Thursday 05 February 2026 00:47:21 +0000 (0:00:00.592) 0:02:36.616 ***** 2026-02-05 00:55:30.586289 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:55:30.586293 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586297 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586301 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586304 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586308 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586311 | orchestrator | 2026-02-05 00:55:30.586315 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 00:55:30.586319 | orchestrator | Thursday 05 February 2026 00:47:22 +0000 (0:00:00.675) 0:02:37.292 ***** 2026-02-05 00:55:30.586322 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.586326 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.586330 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.586333 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.586337 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.586341 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.586345 | orchestrator | 2026-02-05 00:55:30.586348 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 00:55:30.586352 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.921) 0:02:38.214 ***** 2026-02-05 00:55:30.586356 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.586359 | orchestrator | 2026-02-05 00:55:30.586363 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 00:55:30.586367 | orchestrator | Thursday 05 February 2026 00:47:24 +0000 (0:00:00.846) 0:02:39.060 ***** 2026-02-05 00:55:30.586371 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-05 00:55:30.586374 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-05 
00:55:30.586378 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-05 00:55:30.586382 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-05 00:55:30.586385 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-05 00:55:30.586389 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-05 00:55:30.586393 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-05 00:55:30.586397 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-05 00:55:30.586400 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-05 00:55:30.586404 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-05 00:55:30.586408 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-05 00:55:30.586412 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-05 00:55:30.586415 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-05 00:55:30.586419 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-05 00:55:30.586423 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-05 00:55:30.586426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-05 00:55:30.586430 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-05 00:55:30.586434 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-05 00:55:30.586450 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-05 00:55:30.586455 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-05 00:55:30.586458 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-05 00:55:30.586462 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-05 00:55:30.586466 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 
2026-02-05 00:55:30.586469 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-05 00:55:30.586473 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-05 00:55:30.586477 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-05 00:55:30.586484 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-05 00:55:30.586487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-05 00:55:30.586491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-05 00:55:30.586495 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-05 00:55:30.586498 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-05 00:55:30.586502 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-05 00:55:30.586506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-05 00:55:30.586509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-05 00:55:30.586516 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-05 00:55:30.586520 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-05 00:55:30.586524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-05 00:55:30.586528 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-05 00:55:30.586531 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-05 00:55:30.586535 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-05 00:55:30.586539 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-05 00:55:30.586542 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-05 00:55:30.586546 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-05 
00:55:30.586550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-05 00:55:30.586553 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-05 00:55:30.586557 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-05 00:55:30.586561 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-05 00:55:30.586564 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 00:55:30.586569 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-05 00:55:30.586572 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 00:55:30.586576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 00:55:30.586580 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 00:55:30.586583 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 00:55:30.586587 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 00:55:30.586591 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 00:55:30.586594 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 00:55:30.586598 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 00:55:30.586602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 00:55:30.586606 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:55:30.586609 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 00:55:30.586613 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:55:30.586617 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 00:55:30.586621 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:55:30.586624 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:55:30.586628 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:55:30.586632 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:55:30.586636 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:55:30.586672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:55:30.586678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:55:30.586681 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:55:30.586685 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:55:30.586689 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:55:30.586693 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:55:30.586697 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:55:30.586701 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:55:30.586704 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:55:30.586723 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:55:30.586727 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:55:30.586731 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:55:30.586735 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 
2026-02-05 00:55:30.586739 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:55:30.586743 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-05 00:55:30.586747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:55:30.586751 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-05 00:55:30.586755 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-05 00:55:30.586759 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:55:30.586763 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-05 00:55:30.586767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:55:30.586771 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-05 00:55:30.586775 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-05 00:55:30.586779 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-05 00:55:30.586786 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-05 00:55:30.586790 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-05 00:55:30.586794 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-05 00:55:30.586798 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-05 00:55:30.586802 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-05 00:55:30.586806 | orchestrator | 2026-02-05 00:55:30.586810 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 00:55:30.586814 | orchestrator | Thursday 05 February 2026 00:47:30 +0000 (0:00:06.715) 0:02:45.775 ***** 2026-02-05 00:55:30.586818 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586821 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 00:55:30.586825 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586830 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.586834 | orchestrator | 2026-02-05 00:55:30.586838 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 00:55:30.586842 | orchestrator | Thursday 05 February 2026 00:47:32 +0000 (0:00:01.252) 0:02:47.027 ***** 2026-02-05 00:55:30.586845 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.586850 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.586858 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.586862 | orchestrator | 2026-02-05 00:55:30.586866 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 00:55:30.586870 | orchestrator | Thursday 05 February 2026 00:47:32 +0000 (0:00:00.784) 0:02:47.812 ***** 2026-02-05 00:55:30.586874 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.586878 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.586882 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.586886 | orchestrator | 2026-02-05 00:55:30.586890 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2026-02-05 00:55:30.586893 | orchestrator | Thursday 05 February 2026 00:47:34 +0000 (0:00:01.619) 0:02:49.432 ***** 2026-02-05 00:55:30.586897 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.586901 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.586905 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.586909 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586913 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586917 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586921 | orchestrator | 2026-02-05 00:55:30.586925 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 00:55:30.586929 | orchestrator | Thursday 05 February 2026 00:47:35 +0000 (0:00:00.953) 0:02:50.385 ***** 2026-02-05 00:55:30.586933 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.586937 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.586941 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.586944 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586948 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586952 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.586956 | orchestrator | 2026-02-05 00:55:30.586960 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 00:55:30.586964 | orchestrator | Thursday 05 February 2026 00:47:36 +0000 (0:00:00.699) 0:02:51.084 ***** 2026-02-05 00:55:30.586968 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.586972 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.586976 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.586980 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.586984 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.586988 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
00:55:30.586991 | orchestrator | 2026-02-05 00:55:30.587009 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 00:55:30.587014 | orchestrator | Thursday 05 February 2026 00:47:36 +0000 (0:00:00.791) 0:02:51.876 ***** 2026-02-05 00:55:30.587018 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.587022 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.587026 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.587030 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.587033 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.587037 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.587041 | orchestrator | 2026-02-05 00:55:30.587045 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 00:55:30.587049 | orchestrator | Thursday 05 February 2026 00:47:37 +0000 (0:00:00.604) 0:02:52.481 ***** 2026-02-05 00:55:30.587053 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.587056 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.587060 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.587064 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.587068 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.587075 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.587079 | orchestrator | 2026-02-05 00:55:30.587083 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 00:55:30.587087 | orchestrator | Thursday 05 February 2026 00:47:38 +0000 (0:00:00.817) 0:02:53.299 ***** 2026-02-05 00:55:30.587091 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.587094 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.587098 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.587105 | orchestrator | skipping: 
[testbed-node-0]
2026-02-05 00:55:30.587109 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587112 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587116 | orchestrator |
2026-02-05 00:55:30.587120 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 00:55:30.587124 | orchestrator | Thursday 05 February 2026 00:47:38 +0000 (0:00:00.603) 0:02:53.902 *****
2026-02-05 00:55:30.587128 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587132 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587135 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587139 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587143 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587147 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587151 | orchestrator |
2026-02-05 00:55:30.587155 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 00:55:30.587159 | orchestrator | Thursday 05 February 2026 00:47:40 +0000 (0:00:01.356) 0:02:55.259 *****
2026-02-05 00:55:30.587162 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587166 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587170 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587174 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587178 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587182 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587185 | orchestrator |
2026-02-05 00:55:30.587189 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 00:55:30.587193 | orchestrator | Thursday 05 February 2026 00:47:40 +0000 (0:00:00.640) 0:02:55.899 *****
2026-02-05 00:55:30.587197 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587201 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587205 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587209 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.587213 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.587216 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.587220 | orchestrator |
2026-02-05 00:55:30.587224 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 00:55:30.587228 | orchestrator | Thursday 05 February 2026 00:47:43 +0000 (0:00:03.010) 0:02:58.909 *****
2026-02-05 00:55:30.587232 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.587236 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.587240 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.587244 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587247 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587251 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587255 | orchestrator |
2026-02-05 00:55:30.587259 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 00:55:30.587263 | orchestrator | Thursday 05 February 2026 00:47:44 +0000 (0:00:00.742) 0:02:59.652 *****
2026-02-05 00:55:30.587267 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.587271 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.587275 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.587278 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587282 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587286 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587290 | orchestrator |
2026-02-05 00:55:30.587299 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 00:55:30.587303 | orchestrator | Thursday 05 February 2026 00:47:45 +0000 (0:00:01.282) 0:03:00.934 *****
2026-02-05 00:55:30.587307 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587311 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587315 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587318 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587322 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587326 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587330 | orchestrator |
2026-02-05 00:55:30.587334 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 00:55:30.587338 | orchestrator | Thursday 05 February 2026 00:47:46 +0000 (0:00:00.719) 0:03:01.654 *****
2026-02-05 00:55:30.587342 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-05 00:55:30.587346 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 00:55:30.587350 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 00:55:30.587353 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587369 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587374 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587378 | orchestrator |
2026-02-05 00:55:30.587382 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 00:55:30.587385 | orchestrator | Thursday 05 February 2026 00:47:47 +0000 (0:00:00.801) 0:03:02.455 *****
2026-02-05 00:55:30.587391 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-05 00:55:30.587395 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-05 00:55:30.587402 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-05 00:55:30.587406 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-05 00:55:30.587410 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587414 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-05 00:55:30.587418 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-05 00:55:30.587422 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587426 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587433 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587437 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587441 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587444 | orchestrator |
2026-02-05 00:55:30.587448 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 00:55:30.587452 | orchestrator | Thursday 05 February 2026 00:47:48 +0000 (0:00:00.707) 0:03:03.162 *****
2026-02-05 00:55:30.587456 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587460 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587464 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587468 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587471 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587475 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587479 | orchestrator |
2026-02-05 00:55:30.587483 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 00:55:30.587487 | orchestrator | Thursday 05 February 2026 00:47:48 +0000 (0:00:00.757) 0:03:03.920 *****
2026-02-05 00:55:30.587491 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587495 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587499 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587503 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587507 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587510 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587514 | orchestrator |
2026-02-05 00:55:30.587518 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 00:55:30.587522 | orchestrator | Thursday 05 February 2026 00:47:49 +0000 (0:00:00.522) 0:03:04.442 *****
2026-02-05 00:55:30.587526 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587530 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587534 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587538 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587541 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587545 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587549 | orchestrator |
2026-02-05 00:55:30.587553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 00:55:30.587557 | orchestrator | Thursday 05 February 2026 00:47:50 +0000 (0:00:00.878) 0:03:05.321 *****
2026-02-05 00:55:30.587561 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587565 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587569 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587573 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587577 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587581 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587584 | orchestrator |
2026-02-05 00:55:30.587588 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 00:55:30.587605 | orchestrator | Thursday 05 February 2026 00:47:51 +0000 (0:00:00.804) 0:03:06.125 *****
2026-02-05 00:55:30.587609 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587613 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587617 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587620 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587624 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587628 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587631 | orchestrator |
2026-02-05 00:55:30.587635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 00:55:30.587639 | orchestrator | Thursday 05 February 2026 00:47:51 +0000 (0:00:00.830) 0:03:06.956 *****
2026-02-05 00:55:30.587644 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.587660 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.587664 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587668 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.587672 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587676 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587683 | orchestrator |
2026-02-05 00:55:30.587687 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 00:55:30.587691 | orchestrator | Thursday 05 February 2026 00:47:52 +0000 (0:00:00.688) 0:03:07.645 *****
2026-02-05 00:55:30.587695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.587699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.587702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.587709 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587713 | orchestrator |
2026-02-05 00:55:30.587717 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 00:55:30.587721 | orchestrator | Thursday 05 February 2026 00:47:53 +0000 (0:00:00.372) 0:03:08.017 *****
2026-02-05 00:55:30.587725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.587728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.587732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.587736 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587740 | orchestrator |
2026-02-05 00:55:30.587744 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 00:55:30.587747 | orchestrator | Thursday 05 February 2026 00:47:53 +0000 (0:00:00.361) 0:03:08.379 *****
2026-02-05 00:55:30.587751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.587755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.587759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.587763 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587766 | orchestrator |
2026-02-05 00:55:30.587770 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 00:55:30.587774 | orchestrator | Thursday 05 February 2026 00:47:53 +0000 (0:00:00.513) 0:03:08.892 *****
2026-02-05 00:55:30.587778 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.587782 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.587786 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.587789 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587793 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587797 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587801 | orchestrator |
2026-02-05 00:55:30.587805 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 00:55:30.587809 | orchestrator | Thursday 05 February 2026 00:47:54 +0000 (0:00:00.966) 0:03:09.858 *****
2026-02-05 00:55:30.587812 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-05 00:55:30.587816 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-05 00:55:30.587820 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-05 00:55:30.587824 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-05 00:55:30.587828 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.587832 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-05 00:55:30.587835 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.587839 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-05 00:55:30.587843 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.587847 | orchestrator |
2026-02-05 00:55:30.587850 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 00:55:30.587854 | orchestrator | Thursday 05 February 2026 00:47:56 +0000 (0:00:02.031) 0:03:11.889 *****
2026-02-05 00:55:30.587858 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.587862 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.587866 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.587870 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.587873 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.587877 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.587881 | orchestrator |
2026-02-05 00:55:30.587885 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-05 00:55:30.587892 | orchestrator | Thursday 05 February 2026 00:47:59 +0000 (0:00:02.528) 0:03:14.418 *****
2026-02-05 00:55:30.587896 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.587900 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.587904 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.587908 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.587912 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.587915 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.587919 | orchestrator |
2026-02-05 00:55:30.587923 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-05 00:55:30.587927 | orchestrator | Thursday 05 February 2026 00:48:00 +0000 (0:00:01.275) 0:03:15.693 *****
2026-02-05 00:55:30.587931 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.587934 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.587938 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.587942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.587946 | orchestrator |
2026-02-05 00:55:30.587950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-05 00:55:30.587969 | orchestrator | Thursday 05 February 2026 00:48:01 +0000 (0:00:00.321) 0:03:16.567 *****
2026-02-05 00:55:30.587974 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.587978 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.587981 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.587985 | orchestrator |
2026-02-05 00:55:30.587989 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-05 00:55:30.587993 | orchestrator | Thursday 05 February 2026 00:48:01 +0000 (0:00:00.321) 0:03:16.888 *****
2026-02-05 00:55:30.587997 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.588000 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.588004 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.588008 | orchestrator |
2026-02-05 00:55:30.588012 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-05 00:55:30.588015 | orchestrator | Thursday 05 February 2026 00:48:03 +0000 (0:00:01.266) 0:03:18.155 *****
2026-02-05 00:55:30.588019 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:55:30.588023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:55:30.588026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:55:30.588030 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.588034 | orchestrator |
2026-02-05 00:55:30.588038 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-05 00:55:30.588041 | orchestrator | Thursday 05 February 2026 00:48:03 +0000 (0:00:00.568) 0:03:18.724 *****
2026-02-05 00:55:30.588045 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.588049 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.588053 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.588056 | orchestrator |
2026-02-05 00:55:30.588063 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-05 00:55:30.588066 | orchestrator | Thursday 05 February 2026 00:48:04 +0000 (0:00:00.369) 0:03:19.093 *****
2026-02-05 00:55:30.588070 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.588074 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.588078 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.588081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:55:30.588085 | orchestrator |
2026-02-05 00:55:30.588089 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-05 00:55:30.588093 | orchestrator | Thursday 05 February 2026 00:48:05 +0000 (0:00:00.939) 0:03:20.033 *****
2026-02-05 00:55:30.588096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.588100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.588108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.588112 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588115 | orchestrator |
2026-02-05 00:55:30.588119 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-05 00:55:30.588123 | orchestrator | Thursday 05 February 2026 00:48:05 +0000 (0:00:00.421) 0:03:20.454 *****
2026-02-05 00:55:30.588127 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588130 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.588134 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.588138 | orchestrator |
2026-02-05 00:55:30.588142 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-05 00:55:30.588145 | orchestrator | Thursday 05 February 2026 00:48:05 +0000 (0:00:00.324) 0:03:20.778 *****
2026-02-05 00:55:30.588149 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588153 | orchestrator |
2026-02-05 00:55:30.588156 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-05 00:55:30.588160 | orchestrator | Thursday 05 February 2026 00:48:06 +0000 (0:00:00.237) 0:03:21.016 *****
2026-02-05 00:55:30.588164 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588168 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.588171 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.588175 | orchestrator |
2026-02-05 00:55:30.588179 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-05 00:55:30.588182 | orchestrator | Thursday 05 February 2026 00:48:06 +0000 (0:00:00.537) 0:03:21.553 *****
2026-02-05 00:55:30.588186 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588190 | orchestrator |
2026-02-05 00:55:30.588194 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-05 00:55:30.588197 | orchestrator | Thursday 05 February 2026 00:48:06 +0000 (0:00:00.214) 0:03:21.768 *****
2026-02-05 00:55:30.588201 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588205 | orchestrator |
2026-02-05 00:55:30.588208 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-05 00:55:30.588212 | orchestrator | Thursday 05 February 2026 00:48:06 +0000 (0:00:00.222) 0:03:21.991 *****
2026-02-05 00:55:30.588216 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588220 | orchestrator |
2026-02-05 00:55:30.588223 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-05 00:55:30.588227 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.124) 0:03:22.115 *****
2026-02-05 00:55:30.588231 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588235 | orchestrator |
2026-02-05 00:55:30.588238 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-05 00:55:30.588242 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.211) 0:03:22.327 *****
2026-02-05 00:55:30.588246 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588250 | orchestrator |
2026-02-05 00:55:30.588253 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-05 00:55:30.588257 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.204) 0:03:22.531 *****
2026-02-05 00:55:30.588261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.588264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.588268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.588272 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588276 | orchestrator |
2026-02-05 00:55:30.588279 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-05 00:55:30.588296 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.372) 0:03:22.904 *****
2026-02-05 00:55:30.588301 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588305 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.588309 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.588313 | orchestrator |
2026-02-05 00:55:30.588316 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-05 00:55:30.588324 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:00.300) 0:03:23.205 *****
2026-02-05 00:55:30.588328 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588332 | orchestrator |
2026-02-05 00:55:30.588335 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-05 00:55:30.588339 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:00.780) 0:03:23.985 *****
2026-02-05 00:55:30.588343 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588347 | orchestrator |
2026-02-05 00:55:30.588350 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-05 00:55:30.588354 | orchestrator | Thursday 05 February 2026 00:48:09 +0000 (0:00:00.236) 0:03:24.222 *****
2026-02-05 00:55:30.588358 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.588361 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.588365 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.588369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-02-05 00:55:30.588373 | orchestrator |
2026-02-05 00:55:30.588379 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-05 00:55:30.588383 | orchestrator | Thursday 05 February 2026 00:48:10 +0000 (0:00:00.974) 0:03:25.197 *****
2026-02-05 00:55:30.588386 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.588390 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.588394 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.588397 | orchestrator |
2026-02-05 00:55:30.588401 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-05 00:55:30.588405 | orchestrator | Thursday 05 February 2026 00:48:10 +0000 (0:00:00.600) 0:03:25.798 *****
2026-02-05 00:55:30.588409 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.588413 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.588416 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.588420 | orchestrator |
2026-02-05 00:55:30.588424 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-05 00:55:30.588427 | orchestrator | Thursday 05 February 2026 00:48:12 +0000 (0:00:01.235) 0:03:27.033 *****
2026-02-05 00:55:30.588431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.588435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.588438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.588442 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588447 | orchestrator |
2026-02-05 00:55:30.588454 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-05 00:55:30.588460 | orchestrator | Thursday 05 February 2026 00:48:12 +0000 (0:00:00.631) 0:03:27.664 *****
2026-02-05 00:55:30.588466 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.588472 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.588480 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.588486 | orchestrator |
2026-02-05 00:55:30.588492 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-05 00:55:30.588497 | orchestrator | Thursday 05 February 2026 00:48:13 +0000 (0:00:00.398) 0:03:28.062 *****
2026-02-05 00:55:30.588502 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.588508 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.588514 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.588521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:55:30.588532 | orchestrator |
2026-02-05 00:55:30.588540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-05 00:55:30.588546 | orchestrator | Thursday 05 February 2026 00:48:14 +0000 (0:00:01.009) 0:03:29.072 *****
2026-02-05 00:55:30.588552 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.588558 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.588564 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.588578 | orchestrator |
2026-02-05 00:55:30.588584 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-05 00:55:30.588591 | orchestrator | Thursday 05 February 2026 00:48:14 +0000 (0:00:00.265) 0:03:29.338 *****
2026-02-05 00:55:30.588597 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:55:30.588602 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:55:30.588608 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:55:30.588614 | orchestrator |
2026-02-05 00:55:30.588620 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-05 00:55:30.588626 | orchestrator | Thursday 05 February 2026 00:48:15 +0000 (0:00:01.172) 0:03:30.511 *****
2026-02-05 00:55:30.588632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:55:30.588638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:55:30.588644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:55:30.588680 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588686 | orchestrator |
2026-02-05 00:55:30.588691 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-05 00:55:30.588697 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.829) 0:03:31.340 *****
2026-02-05 00:55:30.588702 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.588708 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.588714 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.588719 | orchestrator |
2026-02-05 00:55:30.588725 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-05 00:55:30.588731 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.431) 0:03:31.772 *****
2026-02-05 00:55:30.588737 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588744 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.588750 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.588757 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.588761 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.588796 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.588800 | orchestrator |
2026-02-05 00:55:30.588804 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-05 00:55:30.588808 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.525) 0:03:32.297 *****
2026-02-05 00:55:30.588812 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.588816 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.588819 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.588823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.588827 | orchestrator |
2026-02-05 00:55:30.588831 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-05 00:55:30.588835 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.909) 0:03:33.207 *****
2026-02-05 00:55:30.588839 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.588843 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.588846 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.588850 | orchestrator |
2026-02-05 00:55:30.588854 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-05 00:55:30.588858 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.280) 0:03:33.487 *****
2026-02-05 00:55:30.588862 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.588865 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.588869 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.588873 | orchestrator |
2026-02-05 00:55:30.588876 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-05 00:55:30.588886 | orchestrator | Thursday 05 February 2026 00:48:19 +0000 (0:00:01.480) 0:03:34.967 *****
2026-02-05 00:55:30.588890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:55:30.588893 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:55:30.588897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:55:30.588905 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.588909 | orchestrator |
2026-02-05 00:55:30.588913 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-05 00:55:30.588917 | orchestrator | Thursday 05 February 2026 00:48:20 +0000 (0:00:00.899) 0:03:35.867 *****
2026-02-05 00:55:30.588920 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.588924 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.588928 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.588932 | orchestrator |
2026-02-05 00:55:30.588936 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-05 00:55:30.588940 | orchestrator |
2026-02-05 00:55:30.588944 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 00:55:30.588947 | orchestrator | Thursday 05 February 2026 00:48:21 +0000 (0:00:00.497) 0:03:36.364 *****
2026-02-05 00:55:30.588951 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.588956 | orchestrator |
2026-02-05 00:55:30.588960 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 00:55:30.588963 | orchestrator | Thursday 05 February 2026 00:48:21 +0000 (0:00:00.440) 0:03:36.805 *****
2026-02-05 00:55:30.588967 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.588971 | orchestrator |
2026-02-05 00:55:30.588975 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 00:55:30.588978 | orchestrator | Thursday 05 February 2026 00:48:22 +0000 (0:00:00.605) 0:03:37.410 *****
2026-02-05 00:55:30.588982 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.588986 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.588990 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.588994 | orchestrator |
2026-02-05 00:55:30.588998 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 00:55:30.589001 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:00.661) 0:03:38.072 *****
2026-02-05 00:55:30.589005 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.589009 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.589013 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.589017 | orchestrator |
2026-02-05 00:55:30.589020 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 00:55:30.589024 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:00.333) 0:03:38.405 *****
2026-02-05 00:55:30.589028 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.589032 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.589036 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.589039 | orchestrator |
2026-02-05 00:55:30.589043 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 00:55:30.589047 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:00.561) 0:03:38.967 *****
2026-02-05 00:55:30.589051 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.589055 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.589058 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.589062 | orchestrator |
2026-02-05 00:55:30.589066 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 00:55:30.589070 | orchestrator | Thursday 05 February 2026 00:48:24 +0000 (0:00:00.299) 0:03:39.266 *****
2026-02-05 00:55:30.589074 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.589078 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.589082 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.589085 | orchestrator |
2026-02-05 00:55:30.589089 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 00:55:30.589093 | orchestrator | Thursday 05 February 2026 00:48:24 +0000 (0:00:00.717) 0:03:39.984 *****
2026-02-05 00:55:30.589097 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.589101 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.589109 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.589112 | orchestrator |
2026-02-05 00:55:30.589116 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 00:55:30.589120 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.314) 0:03:40.298 *****
2026-02-05 00:55:30.589137 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.589142 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.589145 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.589149 | orchestrator |
2026-02-05 00:55:30.589153 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 00:55:30.589157 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.489) 0:03:40.788 *****
2026-02-05 00:55:30.589161 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.589164 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.589168 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.589172 | orchestrator |
2026-02-05 00:55:30.589176 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 00:55:30.589179 | orchestrator | Thursday 05 February 2026 00:48:26 +0000 (0:00:00.721) 0:03:41.536 *****
2026-02-05 00:55:30.589183 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.589187 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.589191 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.589194 | orchestrator |
2026-02-05 00:55:30.589198 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 00:55:30.589202 | orchestrator | Thursday 05 February 2026 00:48:27 +0000 (0:00:00.721) 0:03:42.258 *****
2026-02-05 00:55:30.589206 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.589209 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.589213 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.589217 | orchestrator |
2026-02-05 00:55:30.589220 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 00:55:30.589227 | orchestrator |
Thursday 05 February 2026 00:48:27 +0000 (0:00:00.370) 0:03:42.628 ***** 2026-02-05 00:55:30.589231 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589235 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589239 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589242 | orchestrator | 2026-02-05 00:55:30.589246 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:55:30.589250 | orchestrator | Thursday 05 February 2026 00:48:27 +0000 (0:00:00.303) 0:03:42.932 ***** 2026-02-05 00:55:30.589254 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.589258 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.589261 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.589265 | orchestrator | 2026-02-05 00:55:30.589269 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:55:30.589272 | orchestrator | Thursday 05 February 2026 00:48:28 +0000 (0:00:00.520) 0:03:43.453 ***** 2026-02-05 00:55:30.589276 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.589280 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.589284 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.589287 | orchestrator | 2026-02-05 00:55:30.589291 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:55:30.589295 | orchestrator | Thursday 05 February 2026 00:48:28 +0000 (0:00:00.260) 0:03:43.713 ***** 2026-02-05 00:55:30.589298 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.589302 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.589306 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.589310 | orchestrator | 2026-02-05 00:55:30.589314 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:55:30.589318 | orchestrator | Thursday 05 February 
2026 00:48:28 +0000 (0:00:00.269) 0:03:43.983 ***** 2026-02-05 00:55:30.589322 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.589325 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.589329 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.589337 | orchestrator | 2026-02-05 00:55:30.589341 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:55:30.589344 | orchestrator | Thursday 05 February 2026 00:48:29 +0000 (0:00:00.296) 0:03:44.280 ***** 2026-02-05 00:55:30.589348 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.589352 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.589355 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.589359 | orchestrator | 2026-02-05 00:55:30.589363 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:55:30.589367 | orchestrator | Thursday 05 February 2026 00:48:29 +0000 (0:00:00.462) 0:03:44.742 ***** 2026-02-05 00:55:30.589371 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589375 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589378 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589382 | orchestrator | 2026-02-05 00:55:30.589386 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:55:30.589390 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:00.360) 0:03:45.103 ***** 2026-02-05 00:55:30.589393 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589397 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589401 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589405 | orchestrator | 2026-02-05 00:55:30.589408 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:55:30.589412 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:00.402) 
0:03:45.505 ***** 2026-02-05 00:55:30.589418 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589424 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589430 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589436 | orchestrator | 2026-02-05 00:55:30.589443 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-05 00:55:30.589449 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:00.960) 0:03:46.466 ***** 2026-02-05 00:55:30.589454 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589460 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589466 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589472 | orchestrator | 2026-02-05 00:55:30.589477 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-05 00:55:30.589483 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:00.379) 0:03:46.845 ***** 2026-02-05 00:55:30.589488 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.589494 | orchestrator | 2026-02-05 00:55:30.589500 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-05 00:55:30.589505 | orchestrator | Thursday 05 February 2026 00:48:32 +0000 (0:00:00.675) 0:03:47.520 ***** 2026-02-05 00:55:30.589511 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.589517 | orchestrator | 2026-02-05 00:55:30.589547 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-05 00:55:30.589555 | orchestrator | Thursday 05 February 2026 00:48:32 +0000 (0:00:00.165) 0:03:47.686 ***** 2026-02-05 00:55:30.589560 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 00:55:30.589566 | orchestrator | 2026-02-05 00:55:30.589572 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-05 00:55:30.589577 | orchestrator | Thursday 05 February 2026 00:48:33 +0000 (0:00:01.095) 0:03:48.782 ***** 2026-02-05 00:55:30.589583 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589590 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589596 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589602 | orchestrator | 2026-02-05 00:55:30.589608 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-05 00:55:30.589614 | orchestrator | Thursday 05 February 2026 00:48:34 +0000 (0:00:00.585) 0:03:49.367 ***** 2026-02-05 00:55:30.589620 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589627 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589633 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589661 | orchestrator | 2026-02-05 00:55:30.589668 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-05 00:55:30.589674 | orchestrator | Thursday 05 February 2026 00:48:34 +0000 (0:00:00.354) 0:03:49.722 ***** 2026-02-05 00:55:30.589680 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.589685 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.589691 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.589702 | orchestrator | 2026-02-05 00:55:30.589714 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-05 00:55:30.589720 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:01.297) 0:03:51.020 ***** 2026-02-05 00:55:30.589727 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.589733 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.589739 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.589745 | orchestrator | 2026-02-05 00:55:30.589748 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-05 00:55:30.589752 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:00.749) 0:03:51.769 ***** 2026-02-05 00:55:30.589756 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.589760 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.589763 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.589767 | orchestrator | 2026-02-05 00:55:30.589771 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-05 00:55:30.589775 | orchestrator | Thursday 05 February 2026 00:48:37 +0000 (0:00:00.901) 0:03:52.671 ***** 2026-02-05 00:55:30.589779 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589783 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589786 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589790 | orchestrator | 2026-02-05 00:55:30.589794 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-05 00:55:30.589798 | orchestrator | Thursday 05 February 2026 00:48:38 +0000 (0:00:00.673) 0:03:53.345 ***** 2026-02-05 00:55:30.589802 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.589806 | orchestrator | 2026-02-05 00:55:30.589809 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-05 00:55:30.589813 | orchestrator | Thursday 05 February 2026 00:48:39 +0000 (0:00:01.106) 0:03:54.451 ***** 2026-02-05 00:55:30.589817 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589821 | orchestrator | 2026-02-05 00:55:30.589824 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-05 00:55:30.589828 | orchestrator | Thursday 05 February 2026 00:48:40 +0000 (0:00:00.597) 0:03:55.048 ***** 2026-02-05 00:55:30.589832 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 00:55:30.589836 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.589840 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.589844 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:55:30.589848 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-05 00:55:30.589852 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:55:30.589856 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:55:30.589860 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-05 00:55:30.589864 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:55:30.589868 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-05 00:55:30.589872 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-05 00:55:30.589876 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-05 00:55:30.589879 | orchestrator | 2026-02-05 00:55:30.589883 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-05 00:55:30.589887 | orchestrator | Thursday 05 February 2026 00:48:43 +0000 (0:00:03.317) 0:03:58.366 ***** 2026-02-05 00:55:30.589891 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.589899 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.589904 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.589907 | orchestrator | 2026-02-05 00:55:30.589911 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-05 00:55:30.589915 | orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:01.404) 0:03:59.771 ***** 2026-02-05 00:55:30.589918 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589922 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589926 | orchestrator | ok: [testbed-node-2] 
2026-02-05 00:55:30.589930 | orchestrator | 2026-02-05 00:55:30.589974 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-05 00:55:30.589979 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.353) 0:04:00.125 ***** 2026-02-05 00:55:30.589983 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.589987 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.589991 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.589994 | orchestrator | 2026-02-05 00:55:30.589998 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-05 00:55:30.590002 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.405) 0:04:00.530 ***** 2026-02-05 00:55:30.590006 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.590061 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.590067 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.590071 | orchestrator | 2026-02-05 00:55:30.590075 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-05 00:55:30.590079 | orchestrator | Thursday 05 February 2026 00:48:47 +0000 (0:00:02.143) 0:04:02.673 ***** 2026-02-05 00:55:30.590083 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.590087 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.590091 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.590094 | orchestrator | 2026-02-05 00:55:30.590098 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-05 00:55:30.590102 | orchestrator | Thursday 05 February 2026 00:48:51 +0000 (0:00:03.664) 0:04:06.338 ***** 2026-02-05 00:55:30.590106 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590109 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.590113 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.590117 
| orchestrator | 2026-02-05 00:55:30.590121 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-05 00:55:30.590124 | orchestrator | Thursday 05 February 2026 00:48:51 +0000 (0:00:00.557) 0:04:06.896 ***** 2026-02-05 00:55:30.590129 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.590133 | orchestrator | 2026-02-05 00:55:30.590136 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-05 00:55:30.590140 | orchestrator | Thursday 05 February 2026 00:48:52 +0000 (0:00:00.502) 0:04:07.399 ***** 2026-02-05 00:55:30.590144 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590148 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.590152 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.590155 | orchestrator | 2026-02-05 00:55:30.590159 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-05 00:55:30.590163 | orchestrator | Thursday 05 February 2026 00:48:52 +0000 (0:00:00.316) 0:04:07.715 ***** 2026-02-05 00:55:30.590167 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590171 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.590174 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.590178 | orchestrator | 2026-02-05 00:55:30.590182 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-05 00:55:30.590186 | orchestrator | Thursday 05 February 2026 00:48:53 +0000 (0:00:00.471) 0:04:08.187 ***** 2026-02-05 00:55:30.590189 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.590193 | orchestrator | 2026-02-05 00:55:30.590204 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-02-05 00:55:30.590208 | orchestrator | Thursday 05 February 2026 00:48:53 +0000 (0:00:00.505) 0:04:08.692 ***** 2026-02-05 00:55:30.590212 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.590216 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.590219 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.590223 | orchestrator | 2026-02-05 00:55:30.590227 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-05 00:55:30.590231 | orchestrator | Thursday 05 February 2026 00:48:55 +0000 (0:00:01.596) 0:04:10.288 ***** 2026-02-05 00:55:30.590235 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.590238 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.590242 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.590246 | orchestrator | 2026-02-05 00:55:30.590249 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-05 00:55:30.590253 | orchestrator | Thursday 05 February 2026 00:48:56 +0000 (0:00:01.361) 0:04:11.649 ***** 2026-02-05 00:55:30.590257 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.590261 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.590265 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.590269 | orchestrator | 2026-02-05 00:55:30.590272 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-05 00:55:30.590276 | orchestrator | Thursday 05 February 2026 00:48:58 +0000 (0:00:01.731) 0:04:13.380 ***** 2026-02-05 00:55:30.590280 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.590284 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.590287 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.590291 | orchestrator | 2026-02-05 00:55:30.590295 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-02-05 00:55:30.590299 | orchestrator | Thursday 05 February 2026 00:49:01 +0000 (0:00:02.767) 0:04:16.148 ***** 2026-02-05 00:55:30.590302 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.590306 | orchestrator | 2026-02-05 00:55:30.590310 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-05 00:55:30.590314 | orchestrator | Thursday 05 February 2026 00:49:01 +0000 (0:00:00.788) 0:04:16.936 ***** 2026-02-05 00:55:30.590317 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-02-05 00:55:30.590321 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.590325 | orchestrator | 2026-02-05 00:55:30.590329 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-05 00:55:30.590332 | orchestrator | Thursday 05 February 2026 00:49:23 +0000 (0:00:22.046) 0:04:38.983 ***** 2026-02-05 00:55:30.590336 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.590340 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.590344 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.590347 | orchestrator | 2026-02-05 00:55:30.590351 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-05 00:55:30.590355 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:10.294) 0:04:49.277 ***** 2026-02-05 00:55:30.590359 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590362 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.590366 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.590370 | orchestrator | 2026-02-05 00:55:30.590374 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-05 00:55:30.590391 | orchestrator | 
Thursday 05 February 2026 00:49:34 +0000 (0:00:00.268) 0:04:49.546 ***** 2026-02-05 00:55:30.590397 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-05 00:55:30.590456 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-05 00:55:30.590479 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-05 00:55:30.590487 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-05 00:55:30.590494 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-05 00:55:30.590501 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a08cce1c9b9c75efeaac3a0c8f6e086a4ff3c090'}])  2026-02-05 00:55:30.590509 | orchestrator | 2026-02-05 00:55:30.590515 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:55:30.590520 | orchestrator | Thursday 05 February 2026 00:49:49 +0000 (0:00:15.204) 0:05:04.750 ***** 2026-02-05 00:55:30.590526 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590531 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.590536 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.590542 | orchestrator | 2026-02-05 00:55:30.590548 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-05 00:55:30.590553 | orchestrator | Thursday 05 February 2026 00:49:50 +0000 (0:00:00.335) 0:05:05.086 ***** 2026-02-05 00:55:30.590558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.590564 | orchestrator | 2026-02-05 00:55:30.590569 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-05 00:55:30.590575 | orchestrator | Thursday 05 February 2026 00:49:50 +0000 (0:00:00.485) 0:05:05.572 ***** 2026-02-05 00:55:30.590580 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.590586 | orchestrator | ok: [testbed-node-1] 2026-02-05 
00:55:30.590591 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.590596 | orchestrator | 2026-02-05 00:55:30.590602 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-05 00:55:30.590608 | orchestrator | Thursday 05 February 2026 00:49:51 +0000 (0:00:00.446) 0:05:06.019 ***** 2026-02-05 00:55:30.590613 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590619 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.590625 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.590631 | orchestrator | 2026-02-05 00:55:30.590637 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-05 00:55:30.590643 | orchestrator | Thursday 05 February 2026 00:49:51 +0000 (0:00:00.301) 0:05:06.320 ***** 2026-02-05 00:55:30.590698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 00:55:30.590705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 00:55:30.590711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 00:55:30.590717 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.590724 | orchestrator | 2026-02-05 00:55:30.590728 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-05 00:55:30.590732 | orchestrator | Thursday 05 February 2026 00:49:51 +0000 (0:00:00.542) 0:05:06.863 ***** 2026-02-05 00:55:30.590736 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.590740 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.590771 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.590777 | orchestrator | 2026-02-05 00:55:30.590783 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-05 00:55:30.590789 | orchestrator | 2026-02-05 00:55:30.590794 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************
2026-02-05 00:55:30.590800 | orchestrator | Thursday 05 February 2026 00:49:52 +0000 (0:00:00.464) 0:05:07.327 *****
2026-02-05 00:55:30.590805 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.590811 | orchestrator |
2026-02-05 00:55:30.590823 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 00:55:30.590830 | orchestrator | Thursday 05 February 2026 00:49:52 +0000 (0:00:00.574) 0:05:07.902 *****
2026-02-05 00:55:30.590836 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.590843 | orchestrator |
2026-02-05 00:55:30.590849 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 00:55:30.590855 | orchestrator | Thursday 05 February 2026 00:49:53 +0000 (0:00:00.466) 0:05:08.368 *****
2026-02-05 00:55:30.590861 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.590867 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.590873 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.590880 | orchestrator |
2026-02-05 00:55:30.590891 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 00:55:30.590898 | orchestrator | Thursday 05 February 2026 00:49:54 +0000 (0:00:00.799) 0:05:09.168 *****
2026-02-05 00:55:30.590904 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.590911 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.590917 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.590924 | orchestrator |
2026-02-05 00:55:30.590930 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 00:55:30.590937 | orchestrator | Thursday 05 February 2026 00:49:54 +0000 (0:00:00.289) 0:05:09.457 *****
2026-02-05 00:55:30.590944 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.590950 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.590954 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.590957 | orchestrator |
2026-02-05 00:55:30.590961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 00:55:30.590965 | orchestrator | Thursday 05 February 2026 00:49:54 +0000 (0:00:00.265) 0:05:09.722 *****
2026-02-05 00:55:30.590968 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.590972 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.590976 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.590982 | orchestrator |
2026-02-05 00:55:30.590988 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 00:55:30.590993 | orchestrator | Thursday 05 February 2026 00:49:54 +0000 (0:00:00.267) 0:05:09.990 *****
2026-02-05 00:55:30.590999 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591005 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591011 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591017 | orchestrator |
2026-02-05 00:55:30.591023 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 00:55:30.591034 | orchestrator | Thursday 05 February 2026 00:49:55 +0000 (0:00:00.789) 0:05:10.779 *****
2026-02-05 00:55:30.591040 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591045 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591051 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591057 | orchestrator |
2026-02-05 00:55:30.591063 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 00:55:30.591069 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:00.461) 0:05:11.241 *****
2026-02-05 00:55:30.591074 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591080 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591087 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591094 | orchestrator |
2026-02-05 00:55:30.591101 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 00:55:30.591106 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:00.257) 0:05:11.498 *****
2026-02-05 00:55:30.591111 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591117 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591123 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591128 | orchestrator |
2026-02-05 00:55:30.591134 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 00:55:30.591139 | orchestrator | Thursday 05 February 2026 00:49:57 +0000 (0:00:00.696) 0:05:12.195 *****
2026-02-05 00:55:30.591145 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591151 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591156 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591162 | orchestrator |
2026-02-05 00:55:30.591169 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 00:55:30.591175 | orchestrator | Thursday 05 February 2026 00:49:57 +0000 (0:00:00.704) 0:05:12.899 *****
2026-02-05 00:55:30.591181 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591187 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591192 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591198 | orchestrator |
2026-02-05 00:55:30.591204 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 00:55:30.591210 | orchestrator | Thursday 05 February 2026 00:49:58 +0000 (0:00:00.437) 0:05:13.337 *****
2026-02-05 00:55:30.591218 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591222 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591226 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591230 | orchestrator |
2026-02-05 00:55:30.591234 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 00:55:30.591237 | orchestrator | Thursday 05 February 2026 00:49:58 +0000 (0:00:00.271) 0:05:13.608 *****
2026-02-05 00:55:30.591241 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591245 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591249 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591252 | orchestrator |
2026-02-05 00:55:30.591256 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 00:55:30.591287 | orchestrator | Thursday 05 February 2026 00:49:58 +0000 (0:00:00.267) 0:05:13.875 *****
2026-02-05 00:55:30.591291 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591295 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591299 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591302 | orchestrator |
2026-02-05 00:55:30.591306 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 00:55:30.591310 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.264) 0:05:14.140 *****
2026-02-05 00:55:30.591314 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591318 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591321 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591325 | orchestrator |
2026-02-05 00:55:30.591332 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 00:55:30.591338 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.440) 0:05:14.581 *****
2026-02-05 00:55:30.591350 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591356 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591363 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591368 | orchestrator |
2026-02-05 00:55:30.591375 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 00:55:30.591381 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.260) 0:05:14.841 *****
2026-02-05 00:55:30.591387 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591397 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591403 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591410 | orchestrator |
2026-02-05 00:55:30.591416 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 00:55:30.591427 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:00.274) 0:05:15.116 *****
2026-02-05 00:55:30.591434 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591441 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591448 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591454 | orchestrator |
2026-02-05 00:55:30.591461 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 00:55:30.591467 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:00.331) 0:05:15.447 *****
2026-02-05 00:55:30.591474 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591480 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591488 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591492 | orchestrator |
2026-02-05 00:55:30.591496 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 00:55:30.591499 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:00.450) 0:05:15.897 *****
2026-02-05 00:55:30.591503 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591507 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591510 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591514 | orchestrator |
2026-02-05 00:55:30.591519 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-05 00:55:30.591525 | orchestrator | Thursday 05 February 2026 00:50:01 +0000 (0:00:00.475) 0:05:16.373 *****
2026-02-05 00:55:30.591531 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:55:30.591537 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:55:30.591544 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:55:30.591550 | orchestrator |
2026-02-05 00:55:30.591555 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-05 00:55:30.591562 | orchestrator | Thursday 05 February 2026 00:50:01 +0000 (0:00:00.581) 0:05:16.955 *****
2026-02-05 00:55:30.591569 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.591575 | orchestrator |
2026-02-05 00:55:30.591581 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-05 00:55:30.591588 | orchestrator | Thursday 05 February 2026 00:50:02 +0000 (0:00:00.598) 0:05:17.553 *****
2026-02-05 00:55:30.591594 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.591598 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.591601 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.591605 | orchestrator |
2026-02-05 00:55:30.591609 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-05 00:55:30.591613 | orchestrator | Thursday 05 February 2026 00:50:03 +0000 (0:00:00.701) 0:05:18.254 *****
2026-02-05 00:55:30.591616 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591620 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591624 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591631 | orchestrator |
2026-02-05 00:55:30.591637 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-05 00:55:30.591643 | orchestrator | Thursday 05 February 2026 00:50:03 +0000 (0:00:00.276) 0:05:18.531 *****
2026-02-05 00:55:30.591675 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591682 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591688 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591693 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-05 00:55:30.591700 | orchestrator |
2026-02-05 00:55:30.591706 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-05 00:55:30.591713 | orchestrator | Thursday 05 February 2026 00:50:13 +0000 (0:00:10.157) 0:05:28.689 *****
2026-02-05 00:55:30.591719 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591725 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591731 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591737 | orchestrator |
2026-02-05 00:55:30.591743 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-05 00:55:30.591749 | orchestrator | Thursday 05 February 2026 00:50:13 +0000 (0:00:00.309) 0:05:28.998 *****
2026-02-05 00:55:30.591756 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591762 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-05 00:55:30.591768 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-05 00:55:30.591774 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591780 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 00:55:30.591819 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 00:55:30.591827 | orchestrator |
2026-02-05 00:55:30.591833 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-05 00:55:30.591837 | orchestrator | Thursday 05 February 2026 00:50:16 +0000 (0:00:02.404) 0:05:31.403 *****
2026-02-05 00:55:30.591841 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591845 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-05 00:55:30.591848 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-05 00:55:30.591852 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 00:55:30.591856 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-05 00:55:30.591860 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-05 00:55:30.591864 | orchestrator |
2026-02-05 00:55:30.591868 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-05 00:55:30.591871 | orchestrator | Thursday 05 February 2026 00:50:17 +0000 (0:00:01.132) 0:05:32.535 *****
2026-02-05 00:55:30.591875 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.591879 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.591883 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.591887 | orchestrator |
2026-02-05 00:55:30.591891 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-05 00:55:30.591894 | orchestrator | Thursday 05 February 2026 00:50:18 +0000 (0:00:00.671) 0:05:33.207 *****
2026-02-05 00:55:30.591898 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591902 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591911 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591915 | orchestrator |
2026-02-05 00:55:30.591918 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-05 00:55:30.591922 | orchestrator | Thursday 05 February 2026 00:50:18 +0000 (0:00:00.295) 0:05:33.503 *****
2026-02-05 00:55:30.591926 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591930 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591933 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591937 | orchestrator |
2026-02-05 00:55:30.591941 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-05 00:55:30.591945 | orchestrator | Thursday 05 February 2026 00:50:18 +0000 (0:00:00.266) 0:05:33.770 *****
2026-02-05 00:55:30.591949 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.591959 | orchestrator |
2026-02-05 00:55:30.591962 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-05 00:55:30.591966 | orchestrator | Thursday 05 February 2026 00:50:19 +0000 (0:00:00.635) 0:05:34.405 *****
2026-02-05 00:55:30.591970 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591974 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.591977 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.591981 | orchestrator |
2026-02-05 00:55:30.591985 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-05 00:55:30.591989 | orchestrator | Thursday 05 February 2026 00:50:19 +0000 (0:00:00.288) 0:05:34.694 *****
2026-02-05 00:55:30.591992 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.591996 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.592000 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:55:30.592004 | orchestrator |
2026-02-05 00:55:30.592007 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-05 00:55:30.592011 | orchestrator | Thursday 05 February 2026 00:50:19 +0000 (0:00:00.269) 0:05:34.963 *****
2026-02-05 00:55:30.592015 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.592019 | orchestrator |
2026-02-05 00:55:30.592023 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-05 00:55:30.592026 | orchestrator | Thursday 05 February 2026 00:50:20 +0000 (0:00:00.646) 0:05:35.610 *****
2026-02-05 00:55:30.592033 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.592039 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.592044 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.592053 | orchestrator |
2026-02-05 00:55:30.592062 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-05 00:55:30.592067 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:01.150) 0:05:36.761 *****
2026-02-05 00:55:30.592073 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.592080 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.592086 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.592091 | orchestrator |
2026-02-05 00:55:30.592097 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-05 00:55:30.592104 | orchestrator | Thursday 05 February 2026 00:50:22 +0000 (0:00:01.076) 0:05:37.838 *****
2026-02-05 00:55:30.592109 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.592114 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.592120 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.592125 | orchestrator |
2026-02-05 00:55:30.592130 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-05 00:55:30.592136 | orchestrator | Thursday 05 February 2026 00:50:24 +0000 (0:00:01.917) 0:05:39.755 *****
2026-02-05 00:55:30.592142 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.592147 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.592153 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.592158 | orchestrator |
2026-02-05 00:55:30.592164 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-05 00:55:30.592170 | orchestrator | Thursday 05 February 2026 00:50:26 +0000 (0:00:02.109) 0:05:41.865 *****
2026-02-05 00:55:30.592176 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.592181 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:55:30.592187 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-05 00:55:30.592192 | orchestrator |
2026-02-05 00:55:30.592197 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-05 00:55:30.592202 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:00.340) 0:05:42.205 *****
2026-02-05 00:55:30.592235 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-05 00:55:30.592242 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-05 00:55:30.592256 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-05 00:55:30.592262 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-05 00:55:30.592268 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-02-05 00:55:30.592274 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 00:55:30.592279 | orchestrator |
2026-02-05 00:55:30.592285 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-05 00:55:30.592291 | orchestrator | Thursday 05 February 2026 00:50:57 +0000 (0:00:30.432) 0:06:12.638 *****
2026-02-05 00:55:30.592297 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 00:55:30.592304 | orchestrator |
2026-02-05 00:55:30.592309 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-05 00:55:30.592315 | orchestrator | Thursday 05 February 2026 00:50:58 +0000 (0:00:01.296) 0:06:13.934 *****
2026-02-05 00:55:30.592320 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.592326 | orchestrator |
2026-02-05 00:55:30.592338 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-05 00:55:30.592344 | orchestrator | Thursday 05 February 2026 00:50:59 +0000 (0:00:00.262) 0:06:14.196 *****
2026-02-05 00:55:30.592350 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.592356 | orchestrator |
2026-02-05 00:55:30.592362 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-05 00:55:30.592368 | orchestrator | Thursday 05 February 2026 00:50:59 +0000 (0:00:00.274) 0:06:14.471 *****
2026-02-05 00:55:30.592374 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-05 00:55:30.592379 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-05 00:55:30.592385 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-05 00:55:30.592391 | orchestrator |
2026-02-05 00:55:30.592396 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-05 00:55:30.592402 | orchestrator | Thursday 05 February 2026 00:51:05 +0000 (0:00:06.349) 0:06:20.821 *****
2026-02-05 00:55:30.592408 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-05 00:55:30.592414 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-05 00:55:30.592420 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-05 00:55:30.592426 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-05 00:55:30.592432 | orchestrator |
2026-02-05 00:55:30.592437 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-05 00:55:30.592441 | orchestrator | Thursday 05 February 2026 00:51:10 +0000 (0:00:04.749) 0:06:25.571 *****
2026-02-05 00:55:30.592445 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.592449 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.592452 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.592456 | orchestrator |
2026-02-05 00:55:30.592460 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-05 00:55:30.592464 | orchestrator | Thursday 05 February 2026 00:51:11 +0000 (0:00:00.602) 0:06:26.174 *****
2026-02-05 00:55:30.592467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:55:30.592471 | orchestrator |
2026-02-05 00:55:30.592475 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-05 00:55:30.592479 | orchestrator | Thursday 05 February 2026 00:51:11 +0000 (0:00:00.641) 0:06:26.815 *****
2026-02-05 00:55:30.592482 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.592486 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.592490 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.592494 | orchestrator |
2026-02-05 00:55:30.592503 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-05 00:55:30.592506 | orchestrator | Thursday 05 February 2026 00:51:12 +0000 (0:00:00.273) 0:06:27.089 *****
2026-02-05 00:55:30.592510 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:55:30.592514 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:55:30.592518 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:55:30.592521 | orchestrator |
2026-02-05 00:55:30.592525 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-05 00:55:30.592529 | orchestrator | Thursday 05 February 2026 00:51:13 +0000 (0:00:01.185) 0:06:28.275 *****
2026-02-05 00:55:30.592533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:55:30.592537 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:55:30.592540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:55:30.592544 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:55:30.592548 | orchestrator |
2026-02-05 00:55:30.592551 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-05 00:55:30.592555 | orchestrator | Thursday 05 February 2026 00:51:13 +0000 (0:00:00.734) 0:06:29.009 *****
2026-02-05 00:55:30.592559 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:55:30.592563 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:55:30.592567 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:55:30.592570 | orchestrator |
2026-02-05 00:55:30.592574 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-05 00:55:30.592578 | orchestrator |
2026-02-05 00:55:30.592582 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 00:55:30.592585 | orchestrator | Thursday 05 February 2026 00:51:14 +0000 (0:00:00.668) 0:06:29.678 *****
2026-02-05 00:55:30.592612 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:55:30.592617 | orchestrator |
2026-02-05 00:55:30.592621 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 00:55:30.592625 | orchestrator | Thursday 05 February 2026 00:51:15 +0000 (0:00:00.466) 0:06:30.144 *****
2026-02-05 00:55:30.592629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:55:30.592632 | orchestrator |
2026-02-05 00:55:30.592636 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 00:55:30.592640 | orchestrator | Thursday 05 February 2026 00:51:15 +0000 (0:00:00.611) 0:06:30.756 *****
2026-02-05 00:55:30.592644 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.592689 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.592693 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.592697 | orchestrator |
2026-02-05 00:55:30.592701 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 00:55:30.592705 | orchestrator | Thursday 05 February 2026 00:51:16 +0000 (0:00:00.273) 0:06:31.029 *****
2026-02-05 00:55:30.592708 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592712 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592716 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592720 | orchestrator |
2026-02-05 00:55:30.592723 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 00:55:30.592731 | orchestrator | Thursday 05 February 2026 00:51:16 +0000 (0:00:00.641) 0:06:31.670 *****
2026-02-05 00:55:30.592735 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592739 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592743 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592746 | orchestrator |
2026-02-05 00:55:30.592750 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 00:55:30.592754 | orchestrator | Thursday 05 February 2026 00:51:17 +0000 (0:00:00.906) 0:06:32.577 *****
2026-02-05 00:55:30.592758 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592762 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592773 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592777 | orchestrator |
2026-02-05 00:55:30.592781 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 00:55:30.592785 | orchestrator | Thursday 05 February 2026 00:51:18 +0000 (0:00:00.759) 0:06:33.336 *****
2026-02-05 00:55:30.592789 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.592793 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.592797 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.592800 | orchestrator |
2026-02-05 00:55:30.592804 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 00:55:30.592808 | orchestrator | Thursday 05 February 2026 00:51:18 +0000 (0:00:00.282) 0:06:33.618 *****
2026-02-05 00:55:30.592812 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.592816 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.592819 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.592823 | orchestrator |
2026-02-05 00:55:30.592827 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 00:55:30.592831 | orchestrator | Thursday 05 February 2026 00:51:18 +0000 (0:00:00.254) 0:06:33.873 *****
2026-02-05 00:55:30.592835 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.592838 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.592842 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.592846 | orchestrator |
2026-02-05 00:55:30.592850 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 00:55:30.592854 | orchestrator | Thursday 05 February 2026 00:51:19 +0000 (0:00:00.461) 0:06:34.334 *****
2026-02-05 00:55:30.592858 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592861 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592865 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592869 | orchestrator |
2026-02-05 00:55:30.592873 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 00:55:30.592877 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:00.687) 0:06:35.021 *****
2026-02-05 00:55:30.592880 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592884 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592888 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592892 | orchestrator |
2026-02-05 00:55:30.592895 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 00:55:30.592899 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:00.694) 0:06:35.716 *****
2026-02-05 00:55:30.592903 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.592907 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.592911 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.592914 | orchestrator |
2026-02-05 00:55:30.592918 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 00:55:30.592922 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:00.287) 0:06:36.004 *****
2026-02-05 00:55:30.592926 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.592930 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.592934 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.592937 | orchestrator |
2026-02-05 00:55:30.592941 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 00:55:30.592945 | orchestrator | Thursday 05 February 2026 00:51:21 +0000 (0:00:00.450) 0:06:36.455 *****
2026-02-05 00:55:30.592949 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592952 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592956 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592960 | orchestrator |
2026-02-05 00:55:30.592964 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 00:55:30.592967 | orchestrator | Thursday 05 February 2026 00:51:21 +0000 (0:00:00.287) 0:06:36.743 *****
2026-02-05 00:55:30.592971 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.592975 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.592979 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.592983 | orchestrator |
2026-02-05 00:55:30.592986 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 00:55:30.592994 | orchestrator | Thursday 05 February 2026 00:51:22 +0000 (0:00:00.269) 0:06:37.012 *****
2026-02-05 00:55:30.592997 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.593001 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.593008 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.593012 | orchestrator |
2026-02-05 00:55:30.593016 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 00:55:30.593020 | orchestrator | Thursday 05 February 2026 00:51:22 +0000 (0:00:00.287) 0:06:37.299 *****
2026-02-05 00:55:30.593024 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.593028 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.593031 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.593035 | orchestrator |
2026-02-05 00:55:30.593039 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 00:55:30.593043 | orchestrator | Thursday 05 February 2026 00:51:22 +0000 (0:00:00.401) 0:06:37.700 *****
2026-02-05 00:55:30.593046 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.593050 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.593054 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.593057 | orchestrator |
2026-02-05 00:55:30.593061 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 00:55:30.593065 | orchestrator | Thursday 05 February 2026 00:51:22 +0000 (0:00:00.264) 0:06:37.965 *****
2026-02-05 00:55:30.593069 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.593072 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.593076 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.593080 | orchestrator |
2026-02-05 00:55:30.593084 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 00:55:30.593087 | orchestrator | Thursday 05 February 2026 00:51:23 +0000 (0:00:00.260) 0:06:38.225 *****
2026-02-05 00:55:30.593094 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.593098 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.593102 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.593105 | orchestrator |
2026-02-05 00:55:30.593109 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 00:55:30.593113 | orchestrator | Thursday 05 February 2026 00:51:23 +0000 (0:00:00.270) 0:06:38.495 *****
2026-02-05 00:55:30.593117 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.593121 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.593124 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.593128 | orchestrator |
2026-02-05 00:55:30.593132 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-05 00:55:30.593136 | orchestrator | Thursday 05 February 2026 00:51:24 +0000 (0:00:00.619) 0:06:39.115 *****
2026-02-05 00:55:30.593139 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.593143 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.593147 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.593151 | orchestrator |
2026-02-05 00:55:30.593154 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-05 00:55:30.593158 | orchestrator | Thursday 05 February 2026 00:51:24 +0000 (0:00:00.272) 0:06:39.387 *****
2026-02-05 00:55:30.593162 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:55:30.593166 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:55:30.593170 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:55:30.593173 | orchestrator |
2026-02-05 00:55:30.593177 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-05 00:55:30.593181 | orchestrator | Thursday 05 February 2026 00:51:25 +0000 (0:00:00.715) 0:06:40.103 *****
2026-02-05 00:55:30.593185 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:55:30.593188 | orchestrator |
2026-02-05 00:55:30.593192 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-05 00:55:30.593203 | orchestrator | Thursday 05 February 2026 00:51:25 +0000 (0:00:00.594) 0:06:40.697 *****
2026-02-05 00:55:30.593207 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.593211 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.593214 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.593218 | orchestrator |
2026-02-05 00:55:30.593222 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-05 00:55:30.593226 | orchestrator | Thursday 05 February 2026 00:51:25 +0000 (0:00:00.259) 0:06:40.956 *****
2026-02-05 00:55:30.593229 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:55:30.593233 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:55:30.593237 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:55:30.593240 | orchestrator |
2026-02-05 00:55:30.593244 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-05 00:55:30.593248 | orchestrator | Thursday 05 February 2026 00:51:26 +0000 (0:00:00.262) 0:06:41.219 *****
2026-02-05 00:55:30.593252 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.593256 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.593259 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.593263 | orchestrator |
2026-02-05 00:55:30.593267 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-05 00:55:30.593271 | orchestrator | Thursday 05 February 2026 00:51:26 +0000 (0:00:00.594) 0:06:41.813 *****
2026-02-05 00:55:30.593275 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:55:30.593278 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:55:30.593282 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:55:30.593286 | orchestrator |
2026-02-05 00:55:30.593289 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-05 00:55:30.593293 | orchestrator | Thursday 05 February 2026 00:51:27 +0000 (0:00:00.467) 0:06:42.281 *****
2026-02-05 00:55:30.593297 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 00:55:30.593301 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 00:55:30.593305 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 00:55:30.593308 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 00:55:30.593312 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 00:55:30.593321 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 00:55:30.593325 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 00:55:30.593328 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 00:55:30.593332 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 00:55:30.593336 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 00:55:30.593340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 00:55:30.593343 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 00:55:30.593347 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 00:55:30.593351 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 00:55:30.593355 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 00:55:30.593358 | orchestrator | 2026-02-05 00:55:30.593362 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-05 00:55:30.593369 | orchestrator | Thursday 05 February 2026 00:51:30 +0000 (0:00:03.276) 0:06:45.557 ***** 2026-02-05 00:55:30.593372 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.593380 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.593384 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.593387 | orchestrator | 2026-02-05 00:55:30.593391 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-05 00:55:30.593395 | orchestrator | Thursday 05 February 2026 00:51:30 +0000 (0:00:00.258) 0:06:45.815 ***** 2026-02-05 00:55:30.593399 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.593405 | orchestrator | 2026-02-05 00:55:30.593411 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-05 00:55:30.593417 | orchestrator | Thursday 05 February 2026 00:51:31 +0000 (0:00:00.675) 0:06:46.490 ***** 2026-02-05 00:55:30.593423 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 00:55:30.593430 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 00:55:30.593437 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 00:55:30.593443 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-05 00:55:30.593450 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-05 00:55:30.593456 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-05 00:55:30.593462 | orchestrator | 2026-02-05 00:55:30.593468 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-05 00:55:30.593473 | orchestrator | Thursday 05 February 2026 00:51:32 +0000 (0:00:01.056) 0:06:47.547 ***** 2026-02-05 00:55:30.593479 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.593486 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:55:30.593493 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:55:30.593499 | orchestrator | 2026-02-05 00:55:30.593505 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:55:30.593511 | orchestrator | Thursday 05 February 2026 00:51:34 +0000 (0:00:02.197) 0:06:49.744 ***** 2026-02-05 00:55:30.593518 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:55:30.593524 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:55:30.593530 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.593536 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:55:30.593543 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 00:55:30.593549 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.593555 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:55:30.593562 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 00:55:30.593568 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.593574 | orchestrator | 2026-02-05 00:55:30.593581 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-05 00:55:30.593587 | orchestrator | Thursday 05 February 2026 00:51:35 +0000 (0:00:01.199) 0:06:50.943 ***** 2026-02-05 00:55:30.593593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.593599 | orchestrator | 2026-02-05 00:55:30.593606 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-05 00:55:30.593612 | orchestrator | Thursday 05 February 2026 00:51:38 +0000 (0:00:02.094) 0:06:53.037 ***** 2026-02-05 00:55:30.593619 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.593625 | orchestrator | 2026-02-05 00:55:30.593629 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-05 00:55:30.593633 | orchestrator | Thursday 05 February 2026 00:51:38 +0000 (0:00:00.611) 0:06:53.649 ***** 2026-02-05 00:55:30.593637 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-159372f8-6c52-51f3-a9af-3fbf7ffb45fe', 'data_vg': 'ceph-159372f8-6c52-51f3-a9af-3fbf7ffb45fe'}) 2026-02-05 00:55:30.593642 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3e842383-5890-511f-b982-bff6d8042060', 'data_vg': 'ceph-3e842383-5890-511f-b982-bff6d8042060'}) 2026-02-05 00:55:30.593673 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3edfc207-63bb-5e8f-b635-306c655bc02c', 'data_vg': 'ceph-3edfc207-63bb-5e8f-b635-306c655bc02c'}) 2026-02-05 00:55:30.593680 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-22ded513-57d8-573e-a796-c8381d672537', 'data_vg': 'ceph-22ded513-57d8-573e-a796-c8381d672537'}) 2026-02-05 00:55:30.593686 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-523b4628-8322-5ebe-8cc3-60a2eeaa41a5', 'data_vg': 'ceph-523b4628-8322-5ebe-8cc3-60a2eeaa41a5'}) 2026-02-05 00:55:30.593693 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-121c279b-9e45-54e8-9359-e1d452607edd', 'data_vg': 'ceph-121c279b-9e45-54e8-9359-e1d452607edd'}) 2026-02-05 00:55:30.593699 | orchestrator | 2026-02-05 00:55:30.593705 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-05 00:55:30.593710 | orchestrator | Thursday 05 February 2026 00:52:17 +0000 (0:00:39.011) 0:07:32.660 ***** 2026-02-05 00:55:30.593715 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.593721 | orchestrator | skipping: [testbed-node-4] 2026-02-05 
00:55:30.593727 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.593734 | orchestrator | 2026-02-05 00:55:30.593739 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-05 00:55:30.593745 | orchestrator | Thursday 05 February 2026 00:52:17 +0000 (0:00:00.306) 0:07:32.967 ***** 2026-02-05 00:55:30.593756 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.593763 | orchestrator | 2026-02-05 00:55:30.593769 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-05 00:55:30.593775 | orchestrator | Thursday 05 February 2026 00:52:18 +0000 (0:00:00.810) 0:07:33.778 ***** 2026-02-05 00:55:30.593781 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.593787 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.593794 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.593799 | orchestrator | 2026-02-05 00:55:30.593802 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-05 00:55:30.593806 | orchestrator | Thursday 05 February 2026 00:52:19 +0000 (0:00:00.626) 0:07:34.404 ***** 2026-02-05 00:55:30.593810 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.593813 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.593817 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.593821 | orchestrator | 2026-02-05 00:55:30.593825 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-05 00:55:30.593828 | orchestrator | Thursday 05 February 2026 00:52:21 +0000 (0:00:02.359) 0:07:36.763 ***** 2026-02-05 00:55:30.593832 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.593836 | orchestrator | 2026-02-05 00:55:30.593840 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-05 00:55:30.593843 | orchestrator | Thursday 05 February 2026 00:52:22 +0000 (0:00:00.745) 0:07:37.509 ***** 2026-02-05 00:55:30.593847 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.593851 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.593855 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.593858 | orchestrator | 2026-02-05 00:55:30.593862 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-05 00:55:30.593866 | orchestrator | Thursday 05 February 2026 00:52:23 +0000 (0:00:01.123) 0:07:38.632 ***** 2026-02-05 00:55:30.593870 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.593873 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.593877 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.593881 | orchestrator | 2026-02-05 00:55:30.593884 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-05 00:55:30.593888 | orchestrator | Thursday 05 February 2026 00:52:24 +0000 (0:00:01.054) 0:07:39.687 ***** 2026-02-05 00:55:30.593897 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.593900 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.593904 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.593908 | orchestrator | 2026-02-05 00:55:30.593911 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-05 00:55:30.593915 | orchestrator | Thursday 05 February 2026 00:52:26 +0000 (0:00:01.997) 0:07:41.685 ***** 2026-02-05 00:55:30.593919 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.593923 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.593926 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.593930 | orchestrator | 2026-02-05 00:55:30.593934 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-05 00:55:30.593937 | orchestrator | Thursday 05 February 2026 00:52:26 +0000 (0:00:00.287) 0:07:41.973 ***** 2026-02-05 00:55:30.593941 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.593945 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.593949 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.593952 | orchestrator | 2026-02-05 00:55:30.593956 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-05 00:55:30.593960 | orchestrator | Thursday 05 February 2026 00:52:27 +0000 (0:00:00.297) 0:07:42.270 ***** 2026-02-05 00:55:30.593964 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-05 00:55:30.593967 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-05 00:55:30.593971 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-02-05 00:55:30.593975 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 00:55:30.593979 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-02-05 00:55:30.593982 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-02-05 00:55:30.593986 | orchestrator | 2026-02-05 00:55:30.593990 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-05 00:55:30.593993 | orchestrator | Thursday 05 February 2026 00:52:28 +0000 (0:00:00.998) 0:07:43.269 ***** 2026-02-05 00:55:30.593997 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-05 00:55:30.594001 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-05 00:55:30.594005 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-05 00:55:30.594008 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-05 00:55:30.594038 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-02-05 00:55:30.594047 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-05 00:55:30.594051 | orchestrator | 2026-02-05 00:55:30.594055 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-05 00:55:30.594058 | orchestrator | Thursday 05 February 2026 00:52:30 +0000 (0:00:02.384) 0:07:45.653 ***** 2026-02-05 00:55:30.594062 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-05 00:55:30.594066 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-05 00:55:30.594070 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-05 00:55:30.594073 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-05 00:55:30.594077 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-05 00:55:30.594081 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-02-05 00:55:30.594085 | orchestrator | 2026-02-05 00:55:30.594088 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-05 00:55:30.594092 | orchestrator | Thursday 05 February 2026 00:52:34 +0000 (0:00:04.068) 0:07:49.722 ***** 2026-02-05 00:55:30.594096 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594100 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594103 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.594107 | orchestrator | 2026-02-05 00:55:30.594111 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-05 00:55:30.594115 | orchestrator | Thursday 05 February 2026 00:52:36 +0000 (0:00:02.227) 0:07:51.949 ***** 2026-02-05 00:55:30.594118 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594129 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594133 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-05 00:55:30.594137 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.594141 | orchestrator | 2026-02-05 00:55:30.594144 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-05 00:55:30.594148 | orchestrator | Thursday 05 February 2026 00:52:49 +0000 (0:00:12.562) 0:08:04.512 ***** 2026-02-05 00:55:30.594152 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594155 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594159 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594163 | orchestrator | 2026-02-05 00:55:30.594167 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:55:30.594170 | orchestrator | Thursday 05 February 2026 00:52:50 +0000 (0:00:01.298) 0:08:05.810 ***** 2026-02-05 00:55:30.594174 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594178 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594182 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594185 | orchestrator | 2026-02-05 00:55:30.594189 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-05 00:55:30.594193 | orchestrator | Thursday 05 February 2026 00:52:51 +0000 (0:00:00.336) 0:08:06.146 ***** 2026-02-05 00:55:30.594196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.594200 | orchestrator | 2026-02-05 00:55:30.594204 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-05 00:55:30.594208 | orchestrator | Thursday 05 February 2026 00:52:51 +0000 (0:00:00.751) 0:08:06.897 ***** 2026-02-05 00:55:30.594211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:55:30.594215 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-05 00:55:30.594219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:55:30.594223 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594226 | orchestrator | 2026-02-05 00:55:30.594230 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-05 00:55:30.594234 | orchestrator | Thursday 05 February 2026 00:52:52 +0000 (0:00:00.377) 0:08:07.275 ***** 2026-02-05 00:55:30.594238 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594241 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594245 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594249 | orchestrator | 2026-02-05 00:55:30.594252 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-05 00:55:30.594256 | orchestrator | Thursday 05 February 2026 00:52:52 +0000 (0:00:00.333) 0:08:07.609 ***** 2026-02-05 00:55:30.594260 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594264 | orchestrator | 2026-02-05 00:55:30.594267 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-05 00:55:30.594271 | orchestrator | Thursday 05 February 2026 00:52:52 +0000 (0:00:00.215) 0:08:07.825 ***** 2026-02-05 00:55:30.594275 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594278 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594282 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594286 | orchestrator | 2026-02-05 00:55:30.594290 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-05 00:55:30.594293 | orchestrator | Thursday 05 February 2026 00:52:53 +0000 (0:00:00.291) 0:08:08.116 ***** 2026-02-05 00:55:30.594297 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594301 | orchestrator | 2026-02-05 00:55:30.594305 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-05 00:55:30.594308 | orchestrator | Thursday 05 February 2026 00:52:53 +0000 (0:00:00.733) 0:08:08.850 ***** 2026-02-05 00:55:30.594312 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594316 | orchestrator | 2026-02-05 00:55:30.594319 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-05 00:55:30.594326 | orchestrator | Thursday 05 February 2026 00:52:54 +0000 (0:00:00.232) 0:08:09.082 ***** 2026-02-05 00:55:30.594330 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594334 | orchestrator | 2026-02-05 00:55:30.594337 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-05 00:55:30.594341 | orchestrator | Thursday 05 February 2026 00:52:54 +0000 (0:00:00.123) 0:08:09.205 ***** 2026-02-05 00:55:30.594345 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594349 | orchestrator | 2026-02-05 00:55:30.594355 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-05 00:55:30.594359 | orchestrator | Thursday 05 February 2026 00:52:54 +0000 (0:00:00.209) 0:08:09.415 ***** 2026-02-05 00:55:30.594362 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594366 | orchestrator | 2026-02-05 00:55:30.594370 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-05 00:55:30.594373 | orchestrator | Thursday 05 February 2026 00:52:54 +0000 (0:00:00.223) 0:08:09.638 ***** 2026-02-05 00:55:30.594377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:55:30.594381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:55:30.594385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:55:30.594388 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:55:30.594392 | orchestrator | 2026-02-05 00:55:30.594396 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-05 00:55:30.594400 | orchestrator | Thursday 05 February 2026 00:52:55 +0000 (0:00:00.379) 0:08:10.017 ***** 2026-02-05 00:55:30.594403 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594407 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594411 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594414 | orchestrator | 2026-02-05 00:55:30.594418 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-05 00:55:30.594422 | orchestrator | Thursday 05 February 2026 00:52:55 +0000 (0:00:00.307) 0:08:10.325 ***** 2026-02-05 00:55:30.594425 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594429 | orchestrator | 2026-02-05 00:55:30.594436 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-05 00:55:30.594440 | orchestrator | Thursday 05 February 2026 00:52:55 +0000 (0:00:00.207) 0:08:10.532 ***** 2026-02-05 00:55:30.594444 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594448 | orchestrator | 2026-02-05 00:55:30.594451 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-02-05 00:55:30.594455 | orchestrator | 2026-02-05 00:55:30.594459 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:55:30.594463 | orchestrator | Thursday 05 February 2026 00:52:56 +0000 (0:00:00.722) 0:08:11.254 ***** 2026-02-05 00:55:30.594467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.594472 | orchestrator | 2026-02-05 00:55:30.594476 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-02-05 00:55:30.594480 | orchestrator | Thursday 05 February 2026 00:52:57 +0000 (0:00:00.978) 0:08:12.233 ***** 2026-02-05 00:55:30.594483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.594487 | orchestrator | 2026-02-05 00:55:30.594491 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:55:30.594495 | orchestrator | Thursday 05 February 2026 00:52:58 +0000 (0:00:01.018) 0:08:13.251 ***** 2026-02-05 00:55:30.594499 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594502 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594506 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594514 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.594517 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.594521 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.594525 | orchestrator | 2026-02-05 00:55:30.594530 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:55:30.594537 | orchestrator | Thursday 05 February 2026 00:52:59 +0000 (0:00:00.984) 0:08:14.235 ***** 2026-02-05 00:55:30.594543 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.594549 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.594555 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.594561 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.594567 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.594572 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.594578 | orchestrator | 2026-02-05 00:55:30.594584 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:55:30.594590 | orchestrator | Thursday 05 
February 2026 00:52:59 +0000 (0:00:00.720) 0:08:14.955 ***** 2026-02-05 00:55:30.594597 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.594603 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.594609 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.594615 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.594622 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.594628 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.594634 | orchestrator | 2026-02-05 00:55:30.594641 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:55:30.594663 | orchestrator | Thursday 05 February 2026 00:53:00 +0000 (0:00:00.666) 0:08:15.621 ***** 2026-02-05 00:55:30.594669 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.594676 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.594682 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.594688 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.594695 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.594701 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.594707 | orchestrator | 2026-02-05 00:55:30.594714 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:55:30.594720 | orchestrator | Thursday 05 February 2026 00:53:01 +0000 (0:00:00.691) 0:08:16.312 ***** 2026-02-05 00:55:30.594727 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594733 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594739 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594746 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.594752 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.594759 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.594764 | orchestrator | 2026-02-05 00:55:30.594768 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-02-05 00:55:30.594771 | orchestrator | Thursday 05 February 2026 00:53:02 +0000 (0:00:00.895) 0:08:17.208 ***** 2026-02-05 00:55:30.594775 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594779 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594789 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594796 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.594802 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.594808 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.594814 | orchestrator | 2026-02-05 00:55:30.594819 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:55:30.594825 | orchestrator | Thursday 05 February 2026 00:53:02 +0000 (0:00:00.657) 0:08:17.865 ***** 2026-02-05 00:55:30.594831 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.594836 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.594842 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.594847 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.594853 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.594859 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.594865 | orchestrator | 2026-02-05 00:55:30.594871 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:55:30.594937 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.498) 0:08:18.364 ***** 2026-02-05 00:55:30.594942 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.594946 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.594950 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.594953 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.594957 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.594961 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.594965 | orchestrator 
| 2026-02-05 00:55:30.594969 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:55:30.594980 | orchestrator | Thursday 05 February 2026 00:53:04 +0000 (0:00:01.123) 0:08:19.488 ***** 2026-02-05 00:55:30.594984 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.594988 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.594991 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.594995 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.594999 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.595002 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595006 | orchestrator | 2026-02-05 00:55:30.595010 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:55:30.595014 | orchestrator | Thursday 05 February 2026 00:53:05 +0000 (0:00:00.952) 0:08:20.440 ***** 2026-02-05 00:55:30.595018 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595021 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595025 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595029 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.595033 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.595036 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.595040 | orchestrator | 2026-02-05 00:55:30.595044 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:55:30.595048 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:00.706) 0:08:21.147 ***** 2026-02-05 00:55:30.595051 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595055 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595059 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595063 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595066 | orchestrator | ok: [testbed-node-1] 2026-02-05 
00:55:30.595070 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595074 | orchestrator | 2026-02-05 00:55:30.595078 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:55:30.595081 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:00.515) 0:08:21.663 ***** 2026-02-05 00:55:30.595085 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595089 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595093 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595096 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.595100 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.595104 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.595108 | orchestrator | 2026-02-05 00:55:30.595111 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:55:30.595115 | orchestrator | Thursday 05 February 2026 00:53:07 +0000 (0:00:00.640) 0:08:22.303 ***** 2026-02-05 00:55:30.595119 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595123 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595126 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595130 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.595134 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.595138 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.595141 | orchestrator | 2026-02-05 00:55:30.595145 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:55:30.595149 | orchestrator | Thursday 05 February 2026 00:53:07 +0000 (0:00:00.510) 0:08:22.813 ***** 2026-02-05 00:55:30.595153 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595156 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595164 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595167 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 00:55:30.595171 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.595175 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.595179 | orchestrator | 2026-02-05 00:55:30.595182 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:55:30.595186 | orchestrator | Thursday 05 February 2026 00:53:08 +0000 (0:00:00.650) 0:08:23.464 ***** 2026-02-05 00:55:30.595190 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595194 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595197 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595201 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.595205 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.595208 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.595212 | orchestrator | 2026-02-05 00:55:30.595216 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:55:30.595220 | orchestrator | Thursday 05 February 2026 00:53:09 +0000 (0:00:00.546) 0:08:24.010 ***** 2026-02-05 00:55:30.595223 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595227 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595231 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595235 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:55:30.595238 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:55:30.595242 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:55:30.595246 | orchestrator | 2026-02-05 00:55:30.595249 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:55:30.595253 | orchestrator | Thursday 05 February 2026 00:53:09 +0000 (0:00:00.675) 0:08:24.686 ***** 2026-02-05 00:55:30.595262 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595266 | orchestrator | skipping: [testbed-node-3] 
2026-02-05 00:55:30.595269 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595273 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595277 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.595281 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595284 | orchestrator | 2026-02-05 00:55:30.595288 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:55:30.595292 | orchestrator | Thursday 05 February 2026 00:53:10 +0000 (0:00:00.630) 0:08:25.317 ***** 2026-02-05 00:55:30.595296 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595299 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595303 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595307 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595311 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.595314 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595318 | orchestrator | 2026-02-05 00:55:30.595322 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:55:30.595325 | orchestrator | Thursday 05 February 2026 00:53:11 +0000 (0:00:00.743) 0:08:26.060 ***** 2026-02-05 00:55:30.595329 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595333 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595336 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595340 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595344 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.595348 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595351 | orchestrator | 2026-02-05 00:55:30.595355 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-05 00:55:30.595362 | orchestrator | Thursday 05 February 2026 00:53:11 +0000 (0:00:00.914) 0:08:26.974 ***** 2026-02-05 00:55:30.595366 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.595370 | orchestrator | 2026-02-05 00:55:30.595374 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-05 00:55:30.595377 | orchestrator | Thursday 05 February 2026 00:53:15 +0000 (0:00:03.868) 0:08:30.843 ***** 2026-02-05 00:55:30.595381 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.595389 | orchestrator | 2026-02-05 00:55:30.595392 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-05 00:55:30.595396 | orchestrator | Thursday 05 February 2026 00:53:17 +0000 (0:00:01.835) 0:08:32.678 ***** 2026-02-05 00:55:30.595400 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.595404 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.595407 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.595411 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.595415 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595419 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.595422 | orchestrator | 2026-02-05 00:55:30.595426 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-05 00:55:30.595430 | orchestrator | Thursday 05 February 2026 00:53:19 +0000 (0:00:01.536) 0:08:34.215 ***** 2026-02-05 00:55:30.595434 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.595437 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.595441 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.595445 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.595448 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.595452 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.595456 | orchestrator | 2026-02-05 00:55:30.595460 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-05 00:55:30.595463 | orchestrator | Thursday 05 February 2026 00:53:20 +0000 (0:00:01.387) 0:08:35.602 ***** 2026-02-05 00:55:30.595467 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:55:30.595473 | orchestrator | 2026-02-05 00:55:30.595476 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-05 00:55:30.595480 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:01.030) 0:08:36.633 ***** 2026-02-05 00:55:30.595484 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.595488 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.595491 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.595495 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.595499 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.595503 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.595506 | orchestrator | 2026-02-05 00:55:30.595510 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-05 00:55:30.595514 | orchestrator | Thursday 05 February 2026 00:53:23 +0000 (0:00:01.411) 0:08:38.045 ***** 2026-02-05 00:55:30.595517 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.595521 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.595525 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.595529 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.595532 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.595536 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.595540 | orchestrator | 2026-02-05 00:55:30.595543 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-05 00:55:30.595547 | orchestrator | Thursday 05 February 2026 00:53:26 +0000 (0:00:03.520) 
0:08:41.565 ***** 2026-02-05 00:55:30.595551 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-1, testbed-node-2, testbed-node-0 2026-02-05 00:55:30.595555 | orchestrator | 2026-02-05 00:55:30.595559 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-05 00:55:30.595563 | orchestrator | Thursday 05 February 2026 00:53:27 +0000 (0:00:01.110) 0:08:42.675 ***** 2026-02-05 00:55:30.595566 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595570 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595574 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595578 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595581 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.595585 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595593 | orchestrator | 2026-02-05 00:55:30.595597 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-05 00:55:30.595601 | orchestrator | Thursday 05 February 2026 00:53:28 +0000 (0:00:00.835) 0:08:43.511 ***** 2026-02-05 00:55:30.595604 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.595611 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.595615 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.595618 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:55:30.595622 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:55:30.595626 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:55:30.595629 | orchestrator | 2026-02-05 00:55:30.595633 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-05 00:55:30.595637 | orchestrator | Thursday 05 February 2026 00:53:31 +0000 (0:00:03.008) 0:08:46.519 ***** 2026-02-05 00:55:30.595641 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595677 | orchestrator | 
ok: [testbed-node-4] 2026-02-05 00:55:30.595683 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595687 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:55:30.595690 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:55:30.595694 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:55:30.595698 | orchestrator | 2026-02-05 00:55:30.595702 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-05 00:55:30.595705 | orchestrator | 2026-02-05 00:55:30.595709 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:55:30.595713 | orchestrator | Thursday 05 February 2026 00:53:32 +0000 (0:00:01.056) 0:08:47.575 ***** 2026-02-05 00:55:30.595717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-02-05 00:55:30.595721 | orchestrator | 2026-02-05 00:55:30.595724 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:55:30.595731 | orchestrator | Thursday 05 February 2026 00:53:33 +0000 (0:00:00.869) 0:08:48.444 ***** 2026-02-05 00:55:30.595735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.595739 | orchestrator | 2026-02-05 00:55:30.595743 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:55:30.595747 | orchestrator | Thursday 05 February 2026 00:53:34 +0000 (0:00:00.724) 0:08:49.169 ***** 2026-02-05 00:55:30.595750 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595754 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595758 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595762 | orchestrator | 2026-02-05 00:55:30.595765 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-05 00:55:30.595769 | orchestrator | Thursday 05 February 2026 00:53:34 +0000 (0:00:00.319) 0:08:49.489 ***** 2026-02-05 00:55:30.595773 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595777 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595780 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595784 | orchestrator | 2026-02-05 00:55:30.595788 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:55:30.595792 | orchestrator | Thursday 05 February 2026 00:53:35 +0000 (0:00:01.041) 0:08:50.531 ***** 2026-02-05 00:55:30.595795 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595799 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595803 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595807 | orchestrator | 2026-02-05 00:55:30.595810 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:55:30.595814 | orchestrator | Thursday 05 February 2026 00:53:36 +0000 (0:00:00.802) 0:08:51.333 ***** 2026-02-05 00:55:30.595818 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595822 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595825 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595829 | orchestrator | 2026-02-05 00:55:30.595833 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:55:30.595840 | orchestrator | Thursday 05 February 2026 00:53:37 +0000 (0:00:00.787) 0:08:52.121 ***** 2026-02-05 00:55:30.595844 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595848 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595852 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595856 | orchestrator | 2026-02-05 00:55:30.595859 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 
00:55:30.595863 | orchestrator | Thursday 05 February 2026 00:53:37 +0000 (0:00:00.307) 0:08:52.429 ***** 2026-02-05 00:55:30.595867 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595871 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595874 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595878 | orchestrator | 2026-02-05 00:55:30.595882 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:55:30.595886 | orchestrator | Thursday 05 February 2026 00:53:37 +0000 (0:00:00.508) 0:08:52.937 ***** 2026-02-05 00:55:30.595889 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595893 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595897 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595901 | orchestrator | 2026-02-05 00:55:30.595904 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:55:30.595908 | orchestrator | Thursday 05 February 2026 00:53:38 +0000 (0:00:00.304) 0:08:53.242 ***** 2026-02-05 00:55:30.595912 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595916 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595919 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595923 | orchestrator | 2026-02-05 00:55:30.595927 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:55:30.595931 | orchestrator | Thursday 05 February 2026 00:53:38 +0000 (0:00:00.668) 0:08:53.910 ***** 2026-02-05 00:55:30.595934 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.595938 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.595942 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.595945 | orchestrator | 2026-02-05 00:55:30.595949 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:55:30.595953 | orchestrator | 
Thursday 05 February 2026 00:53:39 +0000 (0:00:00.644) 0:08:54.554 ***** 2026-02-05 00:55:30.595957 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595960 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595964 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595968 | orchestrator | 2026-02-05 00:55:30.595972 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:55:30.595975 | orchestrator | Thursday 05 February 2026 00:53:40 +0000 (0:00:00.461) 0:08:55.016 ***** 2026-02-05 00:55:30.595979 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.595986 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.595990 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.595993 | orchestrator | 2026-02-05 00:55:30.595997 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:55:30.596001 | orchestrator | Thursday 05 February 2026 00:53:40 +0000 (0:00:00.293) 0:08:55.309 ***** 2026-02-05 00:55:30.596005 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596008 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596012 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596016 | orchestrator | 2026-02-05 00:55:30.596020 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:55:30.596023 | orchestrator | Thursday 05 February 2026 00:53:40 +0000 (0:00:00.281) 0:08:55.591 ***** 2026-02-05 00:55:30.596027 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596031 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596035 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596038 | orchestrator | 2026-02-05 00:55:30.596042 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:55:30.596086 | orchestrator | Thursday 05 February 2026 00:53:40 
+0000 (0:00:00.269) 0:08:55.860 ***** 2026-02-05 00:55:30.596095 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596099 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596103 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596107 | orchestrator | 2026-02-05 00:55:30.596110 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:55:30.596114 | orchestrator | Thursday 05 February 2026 00:53:41 +0000 (0:00:00.465) 0:08:56.326 ***** 2026-02-05 00:55:30.596121 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596125 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.596129 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.596133 | orchestrator | 2026-02-05 00:55:30.596136 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:55:30.596140 | orchestrator | Thursday 05 February 2026 00:53:41 +0000 (0:00:00.265) 0:08:56.591 ***** 2026-02-05 00:55:30.596144 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596148 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.596151 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.596155 | orchestrator | 2026-02-05 00:55:30.596159 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:55:30.596163 | orchestrator | Thursday 05 February 2026 00:53:41 +0000 (0:00:00.243) 0:08:56.834 ***** 2026-02-05 00:55:30.596166 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596170 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.596174 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.596178 | orchestrator | 2026-02-05 00:55:30.596181 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:55:30.596185 | orchestrator | Thursday 05 February 2026 00:53:42 +0000 (0:00:00.251) 
0:08:57.086 ***** 2026-02-05 00:55:30.596189 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596193 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596196 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596200 | orchestrator | 2026-02-05 00:55:30.596204 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:55:30.596208 | orchestrator | Thursday 05 February 2026 00:53:42 +0000 (0:00:00.545) 0:08:57.631 ***** 2026-02-05 00:55:30.596211 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596215 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596219 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596223 | orchestrator | 2026-02-05 00:55:30.596227 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-05 00:55:30.596230 | orchestrator | Thursday 05 February 2026 00:53:43 +0000 (0:00:00.518) 0:08:58.149 ***** 2026-02-05 00:55:30.596234 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.596238 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.596242 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-05 00:55:30.596246 | orchestrator | 2026-02-05 00:55:30.596250 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-05 00:55:30.596256 | orchestrator | Thursday 05 February 2026 00:53:43 +0000 (0:00:00.389) 0:08:58.539 ***** 2026-02-05 00:55:30.596261 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.596267 | orchestrator | 2026-02-05 00:55:30.596274 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-05 00:55:30.596280 | orchestrator | Thursday 05 February 2026 00:53:45 +0000 (0:00:02.390) 0:09:00.929 ***** 2026-02-05 00:55:30.596288 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-05 00:55:30.596296 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596302 | orchestrator | 2026-02-05 00:55:30.596308 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-05 00:55:30.596315 | orchestrator | Thursday 05 February 2026 00:53:46 +0000 (0:00:00.211) 0:09:01.141 ***** 2026-02-05 00:55:30.596331 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:55:30.596344 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:55:30.596350 | orchestrator | 2026-02-05 00:55:30.596356 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-05 00:55:30.596363 | orchestrator | Thursday 05 February 2026 00:53:54 +0000 (0:00:08.101) 0:09:09.242 ***** 2026-02-05 00:55:30.596375 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:55:30.596381 | orchestrator | 2026-02-05 00:55:30.596388 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-05 00:55:30.596393 | orchestrator | Thursday 05 February 2026 00:53:57 +0000 (0:00:03.753) 0:09:12.995 ***** 2026-02-05 00:55:30.596399 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-05 00:55:30.596406 | orchestrator | 2026-02-05 00:55:30.596413 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-05 00:55:30.596419 | orchestrator | Thursday 05 February 2026 00:53:58 +0000 (0:00:00.511) 0:09:13.507 ***** 2026-02-05 00:55:30.596425 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 00:55:30.596431 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 00:55:30.596437 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-05 00:55:30.596443 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-05 00:55:30.596449 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 00:55:30.596455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-05 00:55:30.596461 | orchestrator | 2026-02-05 00:55:30.596471 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-05 00:55:30.596478 | orchestrator | Thursday 05 February 2026 00:53:59 +0000 (0:00:01.393) 0:09:14.900 ***** 2026-02-05 00:55:30.596484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.596491 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:55:30.596498 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:55:30.596502 | orchestrator | 2026-02-05 00:55:30.596505 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:55:30.596509 | orchestrator | Thursday 05 February 2026 00:54:02 +0000 (0:00:02.154) 0:09:17.054 ***** 2026-02-05 00:55:30.596513 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:55:30.596517 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-02-05 00:55:30.596521 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596525 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:55:30.596528 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 00:55:30.596532 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596536 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:55:30.596539 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 00:55:30.596543 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596547 | orchestrator | 2026-02-05 00:55:30.596551 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-05 00:55:30.596554 | orchestrator | Thursday 05 February 2026 00:54:03 +0000 (0:00:01.276) 0:09:18.331 ***** 2026-02-05 00:55:30.596558 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596567 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596570 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596574 | orchestrator | 2026-02-05 00:55:30.596578 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-05 00:55:30.596582 | orchestrator | Thursday 05 February 2026 00:54:05 +0000 (0:00:02.657) 0:09:20.989 ***** 2026-02-05 00:55:30.596585 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596589 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.596593 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.596597 | orchestrator | 2026-02-05 00:55:30.596600 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-05 00:55:30.596604 | orchestrator | Thursday 05 February 2026 00:54:06 +0000 (0:00:00.334) 0:09:21.324 ***** 2026-02-05 00:55:30.596608 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-05 00:55:30.596612 | orchestrator | 2026-02-05 00:55:30.596615 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-05 00:55:30.596619 | orchestrator | Thursday 05 February 2026 00:54:07 +0000 (0:00:00.825) 0:09:22.149 ***** 2026-02-05 00:55:30.596623 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.596627 | orchestrator | 2026-02-05 00:55:30.596630 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-05 00:55:30.596634 | orchestrator | Thursday 05 February 2026 00:54:07 +0000 (0:00:00.528) 0:09:22.678 ***** 2026-02-05 00:55:30.596638 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596642 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596667 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596674 | orchestrator | 2026-02-05 00:55:30.596680 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-05 00:55:30.596686 | orchestrator | Thursday 05 February 2026 00:54:09 +0000 (0:00:01.624) 0:09:24.303 ***** 2026-02-05 00:55:30.596691 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596697 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596703 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596709 | orchestrator | 2026-02-05 00:55:30.596715 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-05 00:55:30.596721 | orchestrator | Thursday 05 February 2026 00:54:10 +0000 (0:00:01.169) 0:09:25.472 ***** 2026-02-05 00:55:30.596727 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596733 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596740 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596744 | orchestrator | 2026-02-05 
00:55:30.596748 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-05 00:55:30.596752 | orchestrator | Thursday 05 February 2026 00:54:12 +0000 (0:00:01.950) 0:09:27.423 ***** 2026-02-05 00:55:30.596755 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596764 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596768 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596772 | orchestrator | 2026-02-05 00:55:30.596775 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-05 00:55:30.596779 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:02.055) 0:09:29.478 ***** 2026-02-05 00:55:30.596783 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596787 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596791 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596794 | orchestrator | 2026-02-05 00:55:30.596798 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:55:30.596802 | orchestrator | Thursday 05 February 2026 00:54:15 +0000 (0:00:01.474) 0:09:30.953 ***** 2026-02-05 00:55:30.596805 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596809 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596813 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596817 | orchestrator | 2026-02-05 00:55:30.596820 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-05 00:55:30.596829 | orchestrator | Thursday 05 February 2026 00:54:16 +0000 (0:00:00.706) 0:09:31.660 ***** 2026-02-05 00:55:30.596832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.596836 | orchestrator | 2026-02-05 00:55:30.596840 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-05 00:55:30.596844 | orchestrator | Thursday 05 February 2026 00:54:17 +0000 (0:00:00.794) 0:09:32.454 ***** 2026-02-05 00:55:30.596850 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596854 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596858 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.596862 | orchestrator | 2026-02-05 00:55:30.596865 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-05 00:55:30.596869 | orchestrator | Thursday 05 February 2026 00:54:17 +0000 (0:00:00.356) 0:09:32.811 ***** 2026-02-05 00:55:30.596873 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.596877 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.596880 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.596884 | orchestrator | 2026-02-05 00:55:30.596888 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-05 00:55:30.596891 | orchestrator | Thursday 05 February 2026 00:54:19 +0000 (0:00:01.249) 0:09:34.060 ***** 2026-02-05 00:55:30.596895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:55:30.596899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:55:30.596902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:55:30.596906 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596910 | orchestrator | 2026-02-05 00:55:30.596914 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-05 00:55:30.596917 | orchestrator | Thursday 05 February 2026 00:54:19 +0000 (0:00:00.885) 0:09:34.946 ***** 2026-02-05 00:55:30.596921 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.596925 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.596929 | orchestrator | ok: [testbed-node-5] 2026-02-05 
00:55:30.596932 | orchestrator | 2026-02-05 00:55:30.596936 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-05 00:55:30.596940 | orchestrator | 2026-02-05 00:55:30.596943 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:55:30.596947 | orchestrator | Thursday 05 February 2026 00:54:20 +0000 (0:00:00.492) 0:09:35.439 ***** 2026-02-05 00:55:30.596951 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.596955 | orchestrator | 2026-02-05 00:55:30.596959 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:55:30.596962 | orchestrator | Thursday 05 February 2026 00:54:20 +0000 (0:00:00.438) 0:09:35.878 ***** 2026-02-05 00:55:30.596966 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.596970 | orchestrator | 2026-02-05 00:55:30.596974 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:55:30.596977 | orchestrator | Thursday 05 February 2026 00:54:21 +0000 (0:00:00.585) 0:09:36.464 ***** 2026-02-05 00:55:30.596981 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.596985 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.596988 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.596992 | orchestrator | 2026-02-05 00:55:30.596996 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:55:30.597000 | orchestrator | Thursday 05 February 2026 00:54:21 +0000 (0:00:00.276) 0:09:36.740 ***** 2026-02-05 00:55:30.597003 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597007 | orchestrator | ok: [testbed-node-4] 2026-02-05 
00:55:30.597011 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597018 | orchestrator | 2026-02-05 00:55:30.597021 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:55:30.597025 | orchestrator | Thursday 05 February 2026 00:54:22 +0000 (0:00:00.714) 0:09:37.454 ***** 2026-02-05 00:55:30.597029 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597033 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597036 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597040 | orchestrator | 2026-02-05 00:55:30.597044 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:55:30.597048 | orchestrator | Thursday 05 February 2026 00:54:23 +0000 (0:00:00.917) 0:09:38.372 ***** 2026-02-05 00:55:30.597051 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597055 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597059 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597062 | orchestrator | 2026-02-05 00:55:30.597066 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:55:30.597070 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.680) 0:09:39.053 ***** 2026-02-05 00:55:30.597073 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597077 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597081 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597085 | orchestrator | 2026-02-05 00:55:30.597091 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 00:55:30.597095 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.250) 0:09:39.303 ***** 2026-02-05 00:55:30.597098 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597102 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597106 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 00:55:30.597110 | orchestrator | 2026-02-05 00:55:30.597113 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:55:30.597117 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.255) 0:09:39.559 ***** 2026-02-05 00:55:30.597121 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597124 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597128 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597132 | orchestrator | 2026-02-05 00:55:30.597135 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:55:30.597139 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.440) 0:09:39.999 ***** 2026-02-05 00:55:30.597143 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597147 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597151 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597154 | orchestrator | 2026-02-05 00:55:30.597158 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:55:30.597162 | orchestrator | Thursday 05 February 2026 00:54:25 +0000 (0:00:00.706) 0:09:40.706 ***** 2026-02-05 00:55:30.597165 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597169 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597173 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597176 | orchestrator | 2026-02-05 00:55:30.597186 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:55:30.597190 | orchestrator | Thursday 05 February 2026 00:54:26 +0000 (0:00:00.676) 0:09:41.382 ***** 2026-02-05 00:55:30.597193 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597197 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597201 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
00:55:30.597205 | orchestrator | 2026-02-05 00:55:30.597208 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:55:30.597212 | orchestrator | Thursday 05 February 2026 00:54:26 +0000 (0:00:00.288) 0:09:41.670 ***** 2026-02-05 00:55:30.597216 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597219 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597223 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597227 | orchestrator | 2026-02-05 00:55:30.597231 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:55:30.597238 | orchestrator | Thursday 05 February 2026 00:54:27 +0000 (0:00:00.470) 0:09:42.140 ***** 2026-02-05 00:55:30.597241 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597245 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597249 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597253 | orchestrator | 2026-02-05 00:55:30.597257 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:55:30.597260 | orchestrator | Thursday 05 February 2026 00:54:27 +0000 (0:00:00.323) 0:09:42.464 ***** 2026-02-05 00:55:30.597264 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597268 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597271 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597275 | orchestrator | 2026-02-05 00:55:30.597279 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:55:30.597282 | orchestrator | Thursday 05 February 2026 00:54:27 +0000 (0:00:00.343) 0:09:42.807 ***** 2026-02-05 00:55:30.597286 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597290 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597294 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597297 | orchestrator | 2026-02-05 
00:55:30.597301 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:55:30.597305 | orchestrator | Thursday 05 February 2026 00:54:28 +0000 (0:00:00.346) 0:09:43.154 ***** 2026-02-05 00:55:30.597308 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597312 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597316 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597320 | orchestrator | 2026-02-05 00:55:30.597323 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:55:30.597327 | orchestrator | Thursday 05 February 2026 00:54:28 +0000 (0:00:00.621) 0:09:43.775 ***** 2026-02-05 00:55:30.597331 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597334 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597338 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597342 | orchestrator | 2026-02-05 00:55:30.597346 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:55:30.597349 | orchestrator | Thursday 05 February 2026 00:54:29 +0000 (0:00:00.286) 0:09:44.061 ***** 2026-02-05 00:55:30.597353 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597357 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597361 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597364 | orchestrator | 2026-02-05 00:55:30.597368 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:55:30.597372 | orchestrator | Thursday 05 February 2026 00:54:29 +0000 (0:00:00.259) 0:09:44.321 ***** 2026-02-05 00:55:30.597376 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597379 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597383 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597387 | orchestrator | 2026-02-05 00:55:30.597390 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:55:30.597394 | orchestrator | Thursday 05 February 2026 00:54:29 +0000 (0:00:00.288) 0:09:44.610 ***** 2026-02-05 00:55:30.597398 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.597402 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.597405 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.597409 | orchestrator | 2026-02-05 00:55:30.597413 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-05 00:55:30.597417 | orchestrator | Thursday 05 February 2026 00:54:30 +0000 (0:00:00.671) 0:09:45.282 ***** 2026-02-05 00:55:30.597420 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.597424 | orchestrator | 2026-02-05 00:55:30.597428 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 00:55:30.597434 | orchestrator | Thursday 05 February 2026 00:54:30 +0000 (0:00:00.463) 0:09:45.745 ***** 2026-02-05 00:55:30.597438 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597445 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:55:30.597449 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:55:30.597453 | orchestrator | 2026-02-05 00:55:30.597456 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:55:30.597460 | orchestrator | Thursday 05 February 2026 00:54:33 +0000 (0:00:02.751) 0:09:48.497 ***** 2026-02-05 00:55:30.597464 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:55:30.597467 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:55:30.597471 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.597475 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-05 00:55:30.597478 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 00:55:30.597482 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.597486 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:55:30.597490 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 00:55:30.597493 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.597497 | orchestrator | 2026-02-05 00:55:30.597501 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-05 00:55:30.597504 | orchestrator | Thursday 05 February 2026 00:54:34 +0000 (0:00:01.241) 0:09:49.738 ***** 2026-02-05 00:55:30.597511 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597515 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597518 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597522 | orchestrator | 2026-02-05 00:55:30.597526 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-05 00:55:30.597529 | orchestrator | Thursday 05 February 2026 00:54:35 +0000 (0:00:00.349) 0:09:50.088 ***** 2026-02-05 00:55:30.597533 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.597537 | orchestrator | 2026-02-05 00:55:30.597541 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-05 00:55:30.597544 | orchestrator | Thursday 05 February 2026 00:54:35 +0000 (0:00:00.757) 0:09:50.845 ***** 2026-02-05 00:55:30.597548 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.597553 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.597557 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.597561 | orchestrator | 2026-02-05 00:55:30.597565 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-05 00:55:30.597569 | orchestrator | Thursday 05 February 2026 00:54:36 +0000 (0:00:00.820) 0:09:51.665 ***** 2026-02-05 00:55:30.597572 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597576 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 00:55:30.597580 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597583 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 00:55:30.597587 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597591 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 00:55:30.597595 | orchestrator | 2026-02-05 00:55:30.597598 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 00:55:30.597607 | orchestrator | Thursday 05 February 2026 00:54:41 +0000 (0:00:04.688) 0:09:56.353 ***** 2026-02-05 00:55:30.597611 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597615 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:55:30.597618 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597622 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:55:30.597626 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:55:30.597629 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:55:30.597633 | orchestrator | 2026-02-05 00:55:30.597639 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:55:30.597660 | orchestrator | Thursday 05 February 2026 00:54:43 +0000 (0:00:02.294) 0:09:58.647 ***** 2026-02-05 00:55:30.597667 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:55:30.597673 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.597680 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:55:30.597686 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.597692 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:55:30.597697 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.597704 | orchestrator | 2026-02-05 00:55:30.597710 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-05 00:55:30.597720 | orchestrator | Thursday 05 February 2026 00:54:45 +0000 (0:00:01.608) 0:10:00.255 ***** 2026-02-05 00:55:30.597727 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-05 00:55:30.597733 | orchestrator | 2026-02-05 00:55:30.597739 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-05 00:55:30.597745 | orchestrator | Thursday 05 February 2026 00:54:45 +0000 (0:00:00.236) 0:10:00.492 ***** 2026-02-05 00:55:30.597751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-05 00:55:30.597759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597775 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597778 | orchestrator | 2026-02-05 00:55:30.597786 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-05 00:55:30.597790 | orchestrator | Thursday 05 February 2026 00:54:46 +0000 (0:00:00.637) 0:10:01.130 ***** 2026-02-05 00:55:30.597793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:55:30.597812 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:55:30.597820 | orchestrator | 2026-02-05 00:55:30.597823 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-05 00:55:30.597827 | orchestrator | Thursday 05 February 2026 00:54:46 +0000 (0:00:00.590) 0:10:01.721 ***** 2026-02-05 00:55:30.597831 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:55:30.597836 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:55:30.597842 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:55:30.597848 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:55:30.597853 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:55:30.597858 | orchestrator | 2026-02-05 00:55:30.597864 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-05 00:55:30.597870 | orchestrator | Thursday 05 February 2026 00:55:15 +0000 (0:00:29.257) 0:10:30.978 ***** 2026-02-05 00:55:30.597875 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597881 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597887 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597892 | orchestrator | 2026-02-05 00:55:30.597898 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-05 00:55:30.597904 | orchestrator | 
Thursday 05 February 2026 00:55:16 +0000 (0:00:00.272) 0:10:31.251 ***** 2026-02-05 00:55:30.597910 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.597917 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.597923 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.597929 | orchestrator | 2026-02-05 00:55:30.597935 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-05 00:55:30.597940 | orchestrator | Thursday 05 February 2026 00:55:16 +0000 (0:00:00.276) 0:10:31.527 ***** 2026-02-05 00:55:30.597946 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.597952 | orchestrator | 2026-02-05 00:55:30.597956 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-05 00:55:30.597959 | orchestrator | Thursday 05 February 2026 00:55:17 +0000 (0:00:00.648) 0:10:32.175 ***** 2026-02-05 00:55:30.597963 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.597967 | orchestrator | 2026-02-05 00:55:30.597974 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-05 00:55:30.597978 | orchestrator | Thursday 05 February 2026 00:55:17 +0000 (0:00:00.526) 0:10:32.701 ***** 2026-02-05 00:55:30.597982 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.597986 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.597989 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.597993 | orchestrator | 2026-02-05 00:55:30.597997 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-05 00:55:30.598001 | orchestrator | Thursday 05 February 2026 00:55:19 +0000 (0:00:01.669) 0:10:34.371 ***** 2026-02-05 00:55:30.598004 | orchestrator | changed: 
[testbed-node-3] 2026-02-05 00:55:30.598008 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.598043 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.598048 | orchestrator | 2026-02-05 00:55:30.598052 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-05 00:55:30.598056 | orchestrator | Thursday 05 February 2026 00:55:20 +0000 (0:00:01.281) 0:10:35.653 ***** 2026-02-05 00:55:30.598064 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:55:30.598068 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:55:30.598072 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:55:30.598076 | orchestrator | 2026-02-05 00:55:30.598079 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-05 00:55:30.598083 | orchestrator | Thursday 05 February 2026 00:55:22 +0000 (0:00:02.122) 0:10:37.775 ***** 2026-02-05 00:55:30.598090 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.598094 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.598097 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:55:30.598101 | orchestrator | 2026-02-05 00:55:30.598105 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:55:30.598109 | orchestrator | Thursday 05 February 2026 00:55:25 +0000 (0:00:02.411) 0:10:40.186 ***** 2026-02-05 00:55:30.598112 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.598116 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.598120 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.598124 | orchestrator 
| 2026-02-05 00:55:30.598127 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-05 00:55:30.598131 | orchestrator | Thursday 05 February 2026 00:55:25 +0000 (0:00:00.363) 0:10:40.550 ***** 2026-02-05 00:55:30.598135 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:55:30.598139 | orchestrator | 2026-02-05 00:55:30.598142 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-05 00:55:30.598146 | orchestrator | Thursday 05 February 2026 00:55:26 +0000 (0:00:00.742) 0:10:41.293 ***** 2026-02-05 00:55:30.598150 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.598154 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.598158 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.598161 | orchestrator | 2026-02-05 00:55:30.598165 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-05 00:55:30.598169 | orchestrator | Thursday 05 February 2026 00:55:26 +0000 (0:00:00.332) 0:10:41.626 ***** 2026-02-05 00:55:30.598173 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:55:30.598176 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:55:30.598180 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:55:30.598184 | orchestrator | 2026-02-05 00:55:30.598188 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-05 00:55:30.598191 | orchestrator | Thursday 05 February 2026 00:55:26 +0000 (0:00:00.322) 0:10:41.949 ***** 2026-02-05 00:55:30.598195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:55:30.598199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:55:30.598203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:55:30.598206 | orchestrator 
| skipping: [testbed-node-3] 2026-02-05 00:55:30.598210 | orchestrator | 2026-02-05 00:55:30.598214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-05 00:55:30.598217 | orchestrator | Thursday 05 February 2026 00:55:27 +0000 (0:00:00.835) 0:10:42.784 ***** 2026-02-05 00:55:30.598221 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:55:30.598225 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:55:30.598229 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:55:30.598232 | orchestrator | 2026-02-05 00:55:30.598236 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:55:30.598240 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-05 00:55:30.598245 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-05 00:55:30.598252 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-05 00:55:30.598256 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-05 00:55:30.598260 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-05 00:55:30.598266 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-05 00:55:30.598270 | orchestrator | 2026-02-05 00:55:30.598274 | orchestrator | 2026-02-05 00:55:30.598278 | orchestrator | 2026-02-05 00:55:30.598282 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:55:30.598285 | orchestrator | Thursday 05 February 2026 00:55:28 +0000 (0:00:00.228) 0:10:43.012 ***** 2026-02-05 00:55:30.598289 | orchestrator | =============================================================================== 
2026-02-05 00:55:30.598293 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 54.43s 2026-02-05 00:55:30.598297 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.01s 2026-02-05 00:55:30.598300 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.43s 2026-02-05 00:55:30.598304 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.26s 2026-02-05 00:55:30.598308 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.05s 2026-02-05 00:55:30.598312 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.20s 2026-02-05 00:55:30.598315 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.56s 2026-02-05 00:55:30.598319 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.29s 2026-02-05 00:55:30.598323 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.16s 2026-02-05 00:55:30.598330 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.10s 2026-02-05 00:55:30.598333 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.72s 2026-02-05 00:55:30.598337 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.35s 2026-02-05 00:55:30.598341 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s 2026-02-05 00:55:30.598344 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.69s 2026-02-05 00:55:30.598348 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.07s 2026-02-05 00:55:30.598352 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.87s 2026-02-05 
00:55:30.598356 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.75s 2026-02-05 00:55:30.598359 | orchestrator | ceph-mon : Ceph monitor mkfs with keyring ------------------------------- 3.66s 2026-02-05 00:55:30.598363 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.52s 2026-02-05 00:55:30.598367 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.32s 2026-02-05 00:55:30.598371 | orchestrator | 2026-02-05 00:55:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:33.627161 | orchestrator | 2026-02-05 00:55:33 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:33.628443 | orchestrator | 2026-02-05 00:55:33 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:33.629540 | orchestrator | 2026-02-05 00:55:33 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:33.629600 | orchestrator | 2026-02-05 00:55:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:36.672088 | orchestrator | 2026-02-05 00:55:36 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:36.673330 | orchestrator | 2026-02-05 00:55:36 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:36.675177 | orchestrator | 2026-02-05 00:55:36 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:36.675223 | orchestrator | 2026-02-05 00:55:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:39.722797 | orchestrator | 2026-02-05 00:55:39 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:39.723969 | orchestrator | 2026-02-05 00:55:39 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:39.725101 | orchestrator | 2026-02-05 00:55:39 | INFO  | Task 
3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:39.725143 | orchestrator | 2026-02-05 00:55:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:51.906074 | orchestrator | 2026-02-05 00:55:51 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:51.906334 | orchestrator | 2026-02-05 00:55:51 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state
STARTED 2026-02-05 00:55:51.907107 | orchestrator | 2026-02-05 00:55:51 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:51.907461 | orchestrator | 2026-02-05 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:54.932355 | orchestrator | 2026-02-05 00:55:54 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:54.934847 | orchestrator | 2026-02-05 00:55:54 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:54.937125 | orchestrator | 2026-02-05 00:55:54 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:54.937200 | orchestrator | 2026-02-05 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:55:57.977406 | orchestrator | 2026-02-05 00:55:57 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state STARTED 2026-02-05 00:55:57.979514 | orchestrator | 2026-02-05 00:55:57 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED 2026-02-05 00:55:57.981111 | orchestrator | 2026-02-05 00:55:57 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:55:57.981146 | orchestrator | 2026-02-05 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:56:01.012804 | orchestrator | 2026-02-05 00:56:01 | INFO  | Task 77175e7a-3dd5-4786-a88e-7e39ca4f607b is in state SUCCESS 2026-02-05 00:56:01.013702 | orchestrator | 2026-02-05 00:56:01.013737 | orchestrator | 2026-02-05 00:56:01.013745 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:56:01.013752 | orchestrator | 2026-02-05 00:56:01.013759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:56:01.013766 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.240) 0:00:00.241 ***** 2026-02-05 00:56:01.013772 | orchestrator | ok: [testbed-node-0] 
2026-02-05 00:56:01.013779 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:01.013785 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:01.013792 | orchestrator | 2026-02-05 00:56:01.013799 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:56:01.013805 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.252) 0:00:00.493 ***** 2026-02-05 00:56:01.013810 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-05 00:56:01.013815 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-05 00:56:01.013820 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-05 00:56:01.013827 | orchestrator | 2026-02-05 00:56:01.013834 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-05 00:56:01.013840 | orchestrator | 2026-02-05 00:56:01.013847 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 00:56:01.013853 | orchestrator | Thursday 05 February 2026 00:53:04 +0000 (0:00:00.376) 0:00:00.870 ***** 2026-02-05 00:56:01.013859 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:01.013865 | orchestrator | 2026-02-05 00:56:01.013872 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-05 00:56:01.013879 | orchestrator | Thursday 05 February 2026 00:53:04 +0000 (0:00:00.425) 0:00:01.295 ***** 2026-02-05 00:56:01.013885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 00:56:01.013891 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 00:56:01.013898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 
00:56:01.013904 | orchestrator | 2026-02-05 00:56:01.013910 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-05 00:56:01.013916 | orchestrator | Thursday 05 February 2026 00:53:05 +0000 (0:00:00.664) 0:00:01.960 ***** 2026-02-05 00:56:01.013924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.013960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.013979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.013988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014089 | orchestrator | 2026-02-05 00:56:01.014108 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 00:56:01.014115 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:01.470) 0:00:03.431 ***** 2026-02-05 00:56:01.014121 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:01.014128 | orchestrator | 2026-02-05 00:56:01.014134 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-05 00:56:01.014141 | orchestrator | Thursday 05 February 2026 00:53:07 +0000 (0:00:00.510) 0:00:03.941 ***** 2026-02-05 00:56:01.014154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014213 | orchestrator | 2026-02-05 00:56:01.014219 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-05 00:56:01.014225 | orchestrator | Thursday 05 February 2026 00:53:09 +0000 (0:00:02.504) 0:00:06.446 ***** 2026-02-05 00:56:01.014232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:56:01.014245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:56:01.014252 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:01.014263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:56:01.014270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:56:01.014277 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:01.014283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:56:01.014296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:56:01.014303 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:01.014309 | orchestrator | 2026-02-05 00:56:01.014315 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-05 00:56:01.014322 | orchestrator | Thursday 05 February 2026 00:53:11 +0000 (0:00:01.369) 0:00:07.816 ***** 2026-02-05 00:56:01.014332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:56:01.014339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:56:01.014346 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:01.014352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:56:01.014364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:56:01.014371 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:01.014381 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:56:01.014387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:56:01.014395 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 00:56:01.014401 | orchestrator | 2026-02-05 00:56:01.014408 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-05 00:56:01.014418 | orchestrator | Thursday 05 February 2026 00:53:12 +0000 (0:00:01.063) 0:00:08.880 ***** 2026-02-05 00:56:01.014425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:56:01.014461 | orchestrator |
2026-02-05 00:56:01.014466 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-05 00:56:01.014470 | orchestrator | Thursday 05 February 2026 00:53:14 +0000 (0:00:02.361) 0:00:11.241 *****
2026-02-05 00:56:01.014474 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:01.014478 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:01.014482 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:01.014486 | orchestrator |
2026-02-05 00:56:01.014492 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-05 00:56:01.014498 | orchestrator | Thursday 05 February 2026 00:53:17 +0000 (0:00:02.466) 0:00:13.707 *****
2026-02-05 00:56:01.014502 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:01.014506 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:01.014509 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:01.014513 | orchestrator |
2026-02-05 00:56:01.014517 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-02-05 00:56:01.014520 | orchestrator | Thursday 05 February 2026 00:53:18 +0000 (0:00:01.903) 0:00:15.611 *****
2026-02-05 00:56:01.014528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:56:01.014546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:56:01.014566 | orchestrator | 2026-02-05 00:56:01.014570 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 00:56:01.014574 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:02.208) 0:00:17.820 ***** 2026-02-05 00:56:01.014577 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:01.014581 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
00:56:01.014585 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:01.014589 | orchestrator |
2026-02-05 00:56:01.014592 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 00:56:01.014596 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:00.243) 0:00:18.063 *****
2026-02-05 00:56:01.014600 | orchestrator |
2026-02-05 00:56:01.014604 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 00:56:01.014607 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:00.057) 0:00:18.121 *****
2026-02-05 00:56:01.014611 | orchestrator |
2026-02-05 00:56:01.014615 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 00:56:01.014619 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:00.066) 0:00:18.187 *****
2026-02-05 00:56:01.014623 | orchestrator |
2026-02-05 00:56:01.014626 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-05 00:56:01.014630 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:00.075) 0:00:18.263 *****
2026-02-05 00:56:01.014634 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:01.014637 | orchestrator |
2026-02-05 00:56:01.014641 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-05 00:56:01.014645 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:00.181) 0:00:18.444 *****
2026-02-05 00:56:01.014649 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:01.014670 | orchestrator |
2026-02-05 00:56:01.014675 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-02-05 00:56:01.014678 | orchestrator | Thursday 05 February 2026 00:53:22 +0000 (0:00:00.480) 0:00:18.925 *****
2026-02-05 00:56:01.014682 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:01.014686 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:01.014690 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:01.014694 | orchestrator |
2026-02-05 00:56:01.014697 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-02-05 00:56:01.014701 | orchestrator | Thursday 05 February 2026 00:54:27 +0000 (0:01:05.364) 0:01:24.290 *****
2026-02-05 00:56:01.014705 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:01.014709 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:01.014713 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:01.014718 | orchestrator |
2026-02-05 00:56:01.014727 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-05 00:56:01.014734 | orchestrator | Thursday 05 February 2026 00:55:47 +0000 (0:01:20.206) 0:02:44.497 *****
2026-02-05 00:56:01.014740 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:01.014747 | orchestrator |
2026-02-05 00:56:01.014754 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-02-05 00:56:01.014768 | orchestrator | Thursday 05 February 2026 00:55:48 +0000 (0:00:00.570) 0:02:45.068 *****
2026-02-05 00:56:01.014773 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:01.014777 | orchestrator |
2026-02-05 00:56:01.014782 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-02-05 00:56:01.014788 | orchestrator | Thursday 05 February 2026 00:55:51 +0000 (0:00:03.266) 0:02:48.334 *****
2026-02-05 00:56:01.014797 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:01.014804 | orchestrator |
2026-02-05 00:56:01.014810 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-02-05 00:56:01.014817 | orchestrator | Thursday 05 February
2026 00:55:54 +0000 (0:00:02.716) 0:02:51.050 *****
2026-02-05 00:56:01.014823 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:01.014829 | orchestrator |
2026-02-05 00:56:01.014835 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-02-05 00:56:01.014841 | orchestrator | Thursday 05 February 2026 00:55:57 +0000 (0:00:02.894) 0:02:53.945 *****
2026-02-05 00:56:01.014846 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:01.014852 | orchestrator |
2026-02-05 00:56:01.014862 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:56:01.014870 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 00:56:01.014877 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:56:01.014883 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:56:01.014889 | orchestrator |
2026-02-05 00:56:01.014896 | orchestrator |
2026-02-05 00:56:01.014903 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:56:01.014909 | orchestrator | Thursday 05 February 2026 00:55:59 +0000 (0:00:02.315) 0:02:56.261 *****
2026-02-05 00:56:01.014915 | orchestrator | ===============================================================================
2026-02-05 00:56:01.014922 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.21s
2026-02-05 00:56:01.014928 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.36s
2026-02-05 00:56:01.014935 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.27s
2026-02-05 00:56:01.014941 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.89s
2026-02-05 00:56:01.014947 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.72s
2026-02-05 00:56:01.014954 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.50s
2026-02-05 00:56:01.014960 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.47s
2026-02-05 00:56:01.014966 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.36s
2026-02-05 00:56:01.014973 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.32s
2026-02-05 00:56:01.014980 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.21s
2026-02-05 00:56:01.014983 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.90s
2026-02-05 00:56:01.014987 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.47s
2026-02-05 00:56:01.014991 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.37s
2026-02-05 00:56:01.014995 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s
2026-02-05 00:56:01.014998 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s
2026-02-05 00:56:01.015002 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2026-02-05 00:56:01.015006 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2026-02-05 00:56:01.015016 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.48s
2026-02-05 00:56:01.015023 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.43s
2026-02-05 00:56:01.015029 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2026-02-05 00:56:01.015354 | orchestrator | 2026-02-05 00:56:01 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state STARTED
2026-02-05 00:56:01.018172 | orchestrator | 2026-02-05 00:56:01 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED
2026-02-05 00:56:01.018330 | orchestrator | 2026-02-05 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:56:04.060155 | orchestrator | 2026-02-05 00:56:04 | INFO  | Task 7efe37f4-cae8-400a-bc13-da083fe1b6c4 is in state STARTED
2026-02-05 00:56:04.062928 | orchestrator | 2026-02-05 00:56:04 | INFO  | Task 65ae4d20-c7f7-4fd7-9ff5-43e491b7d9e0 is in state SUCCESS
2026-02-05 00:56:04.064215 | orchestrator |
2026-02-05 00:56:04.064264 | orchestrator |
2026-02-05 00:56:04.064283 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-02-05 00:56:04.064292 | orchestrator |
2026-02-05 00:56:04.064298 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-05 00:56:04.064305 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.087) 0:00:00.087 *****
2026-02-05 00:56:04.064312 | orchestrator | ok: [localhost] => {
2026-02-05 00:56:04.064319 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-02-05 00:56:04.064326 | orchestrator | }
2026-02-05 00:56:04.064333 | orchestrator |
2026-02-05 00:56:04.064339 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-02-05 00:56:04.064346 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.036) 0:00:00.124 *****
2026-02-05 00:56:04.064353 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-02-05 00:56:04.064361 | orchestrator | ...ignoring
2026-02-05 00:56:04.064368 | orchestrator |
2026-02-05 00:56:04.064374 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-02-05 00:56:04.064379 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:02.748) 0:00:02.872 *****
2026-02-05 00:56:04.064383 | orchestrator | skipping: [localhost]
2026-02-05 00:56:04.064388 | orchestrator |
2026-02-05 00:56:04.064517 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-02-05 00:56:04.064523 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:00.042) 0:00:02.914 *****
2026-02-05 00:56:04.064527 | orchestrator | ok: [localhost]
2026-02-05 00:56:04.064530 | orchestrator |
2026-02-05 00:56:04.064534 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:56:04.064538 | orchestrator |
2026-02-05 00:56:04.064542 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:56:04.064545 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:00.136) 0:00:03.051 *****
2026-02-05 00:56:04.064549 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:04.064553 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:04.064557 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:04.064560 | orchestrator |
2026-02-05 00:56:04.064564 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:56:04.064568 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:00.265) 0:00:03.316 *****
2026-02-05 00:56:04.064571 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-05 00:56:04.064575 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-05 00:56:04.064579 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-05 00:56:04.064583 | orchestrator | 2026-02-05 00:56:04.064586 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-05 00:56:04.064604 | orchestrator | 2026-02-05 00:56:04.064611 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-05 00:56:04.064621 | orchestrator | Thursday 05 February 2026 00:53:07 +0000 (0:00:00.502) 0:00:03.819 ***** 2026-02-05 00:56:04.064628 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 00:56:04.064634 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 00:56:04.064641 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 00:56:04.064648 | orchestrator | 2026-02-05 00:56:04.064679 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 00:56:04.064683 | orchestrator | Thursday 05 February 2026 00:53:07 +0000 (0:00:00.495) 0:00:04.314 ***** 2026-02-05 00:56:04.064687 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:04.064692 | orchestrator | 2026-02-05 00:56:04.064696 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-05 00:56:04.064699 | orchestrator | Thursday 05 February 2026 00:53:08 +0000 (0:00:00.540) 0:00:04.854 ***** 2026-02-05 00:56:04.064724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.064735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.064749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.064756 | orchestrator | 2026-02-05 00:56:04.064770 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-05 00:56:04.064776 | orchestrator | Thursday 05 February 2026 00:53:11 +0000 (0:00:03.187) 0:00:08.041 ***** 2026-02-05 00:56:04.064782 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.064788 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.064794 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.064800 | orchestrator | 2026-02-05 00:56:04.064807 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-05 00:56:04.064813 | orchestrator | Thursday 05 February 2026 00:53:12 +0000 (0:00:00.607) 0:00:08.649 ***** 2026-02-05 00:56:04.064818 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 00:56:04.064822 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.064826 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.064829 | orchestrator | 2026-02-05 00:56:04.064833 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-05 00:56:04.064837 | orchestrator | Thursday 05 February 2026 00:53:13 +0000 (0:00:01.304) 0:00:09.953 ***** 2026-02-05 00:56:04.064841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.064854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.064859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.064866 | orchestrator | 2026-02-05 00:56:04.064870 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-05 00:56:04.064874 | orchestrator | Thursday 05 February 2026 00:53:16 +0000 (0:00:03.147) 0:00:13.101 ***** 2026-02-05 00:56:04.064878 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.064881 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.064885 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.064889 | orchestrator | 2026-02-05 00:56:04.064893 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-05 00:56:04.064896 | orchestrator | Thursday 05 February 2026 00:53:17 +0000 (0:00:01.024) 0:00:14.125 ***** 2026-02-05 00:56:04.064901 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.064908 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:04.064913 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:04.064919 | orchestrator | 2026-02-05 00:56:04.064925 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 00:56:04.064931 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:04.211) 0:00:18.337 ***** 2026-02-05 00:56:04.064937 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:04.064943 | orchestrator | 2026-02-05 00:56:04.064949 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 00:56:04.064955 | orchestrator | Thursday 05 February 2026 00:53:22 +0000 (0:00:00.455) 0:00:18.793 ***** 2026-02-05 00:56:04.064970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.064985 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.064989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.064993 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065010 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065015 | orchestrator | 2026-02-05 00:56:04.065021 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-05 00:56:04.065028 | orchestrator | Thursday 05 February 2026 
00:53:25 +0000 (0:00:03.189) 0:00:21.982 ***** 2026-02-05 00:56:04.065034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065041 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
00:56:04.065055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065066 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065072 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065080 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065084 | orchestrator | 2026-02-05 00:56:04.065087 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-02-05 00:56:04.065091 | orchestrator | Thursday 05 February 2026 00:53:28 +0000 (0:00:02.856) 0:00:24.839 ***** 2026-02-05 00:56:04.065100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065107 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065115 
| orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:56:04.065126 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
00:56:04.065129 | orchestrator | 2026-02-05 00:56:04.065133 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-05 00:56:04.065137 | orchestrator | Thursday 05 February 2026 00:53:31 +0000 (0:00:02.993) 0:00:27.832 ***** 2026-02-05 00:56:04.065178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.065188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-02-05 00:56:04.065262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:56:04.065269 | orchestrator | 2026-02-05 00:56:04.065273 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-02-05 00:56:04.065277 | orchestrator | Thursday 05 February 2026 00:53:35 +0000 (0:00:03.739) 0:00:31.572 ***** 2026-02-05 00:56:04.065281 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:04.065284 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.065288 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:04.065292 | orchestrator | 2026-02-05 00:56:04.065296 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-05 00:56:04.065299 | orchestrator | Thursday 05 February 2026 00:53:35 +0000 (0:00:00.900) 0:00:32.472 ***** 2026-02-05 00:56:04.065303 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065307 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.065311 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.065315 | orchestrator | 2026-02-05 00:56:04.065318 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-05 00:56:04.065322 | orchestrator | Thursday 05 February 2026 00:53:36 +0000 (0:00:00.638) 0:00:33.111 ***** 2026-02-05 00:56:04.065326 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065330 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.065333 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.065337 | orchestrator | 2026-02-05 00:56:04.065341 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-05 00:56:04.065345 | orchestrator | Thursday 05 February 2026 00:53:36 +0000 (0:00:00.358) 0:00:33.469 ***** 2026-02-05 00:56:04.065349 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-05 00:56:04.065353 | orchestrator | ...ignoring 2026-02-05 00:56:04.065357 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-05 00:56:04.065361 | orchestrator | ...ignoring 2026-02-05 00:56:04.065365 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-05 00:56:04.065371 | orchestrator | ...ignoring 2026-02-05 00:56:04.065375 | orchestrator | 2026-02-05 00:56:04.065379 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-05 00:56:04.065383 | orchestrator | Thursday 05 February 2026 00:53:47 +0000 (0:00:11.006) 0:00:44.475 ***** 2026-02-05 00:56:04.065387 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065390 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.065394 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.065398 | orchestrator | 2026-02-05 00:56:04.065402 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-05 00:56:04.065405 | orchestrator | Thursday 05 February 2026 00:53:48 +0000 (0:00:00.401) 0:00:44.877 ***** 2026-02-05 00:56:04.065409 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065413 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065417 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065420 | orchestrator | 2026-02-05 00:56:04.065424 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-05 00:56:04.065428 | orchestrator | Thursday 05 February 2026 00:53:48 +0000 (0:00:00.642) 0:00:45.519 ***** 2026-02-05 00:56:04.065432 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065435 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065439 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065443 | orchestrator | 2026-02-05 00:56:04.065447 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-05 00:56:04.065450 | orchestrator | Thursday 05 February 2026 00:53:49 +0000 (0:00:00.402) 0:00:45.922 ***** 2026-02-05 00:56:04.065454 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065458 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065462 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065465 | orchestrator | 2026-02-05 00:56:04.065469 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-05 00:56:04.065477 | orchestrator | Thursday 05 February 2026 00:53:49 +0000 (0:00:00.398) 0:00:46.321 ***** 2026-02-05 00:56:04.065481 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065485 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.065489 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.065493 | orchestrator | 2026-02-05 00:56:04.065496 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-05 00:56:04.065500 | orchestrator | Thursday 05 February 2026 00:53:50 +0000 (0:00:00.421) 0:00:46.742 ***** 2026-02-05 00:56:04.065504 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065508 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065511 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065515 | orchestrator | 2026-02-05 00:56:04.065519 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 00:56:04.065523 | orchestrator | Thursday 05 February 2026 00:53:51 +0000 (0:00:00.858) 0:00:47.600 ***** 2026-02-05 00:56:04.065527 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065530 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065534 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-05 00:56:04.065538 | orchestrator | 2026-02-05 
00:56:04.065542 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-05 00:56:04.065545 | orchestrator | Thursday 05 February 2026 00:53:51 +0000 (0:00:00.387) 0:00:47.988 ***** 2026-02-05 00:56:04.065549 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.065553 | orchestrator | 2026-02-05 00:56:04.065557 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-05 00:56:04.065560 | orchestrator | Thursday 05 February 2026 00:54:01 +0000 (0:00:10.447) 0:00:58.435 ***** 2026-02-05 00:56:04.065564 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065568 | orchestrator | 2026-02-05 00:56:04.065574 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 00:56:04.065580 | orchestrator | Thursday 05 February 2026 00:54:02 +0000 (0:00:00.160) 0:00:58.596 ***** 2026-02-05 00:56:04.065594 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065603 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065609 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065616 | orchestrator | 2026-02-05 00:56:04.065622 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-05 00:56:04.065628 | orchestrator | Thursday 05 February 2026 00:54:03 +0000 (0:00:01.000) 0:00:59.597 ***** 2026-02-05 00:56:04.065634 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.065641 | orchestrator | 2026-02-05 00:56:04.065647 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-05 00:56:04.065700 | orchestrator | Thursday 05 February 2026 00:54:10 +0000 (0:00:07.539) 0:01:07.136 ***** 2026-02-05 00:56:04.065707 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065714 | orchestrator | 2026-02-05 00:56:04.065720 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-02-05 00:56:04.065727 | orchestrator | Thursday 05 February 2026 00:54:12 +0000 (0:00:01.620) 0:01:08.757 ***** 2026-02-05 00:56:04.065733 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.065740 | orchestrator | 2026-02-05 00:56:04.065747 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-05 00:56:04.065751 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:02.241) 0:01:10.998 ***** 2026-02-05 00:56:04.065754 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.065758 | orchestrator | 2026-02-05 00:56:04.065762 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-05 00:56:04.065766 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.108) 0:01:11.107 ***** 2026-02-05 00:56:04.065770 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065776 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.065782 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.065788 | orchestrator | 2026-02-05 00:56:04.065794 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-05 00:56:04.065800 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.282) 0:01:11.389 ***** 2026-02-05 00:56:04.065806 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.065811 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-05 00:56:04.065817 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:04.065824 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:04.065831 | orchestrator | 2026-02-05 00:56:04.065837 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-05 00:56:04.065844 | orchestrator | skipping: no hosts matched 2026-02-05 00:56:04.065850 | orchestrator | 2026-02-05 00:56:04.065856 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-05 00:56:04.065862 | orchestrator | 2026-02-05 00:56:04.065866 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-05 00:56:04.065870 | orchestrator | Thursday 05 February 2026 00:54:15 +0000 (0:00:00.528) 0:01:11.918 ***** 2026-02-05 00:56:04.065874 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:04.065878 | orchestrator | 2026-02-05 00:56:04.065882 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-05 00:56:04.065889 | orchestrator | Thursday 05 February 2026 00:54:33 +0000 (0:00:17.769) 0:01:29.687 ***** 2026-02-05 00:56:04.065895 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.065901 | orchestrator | 2026-02-05 00:56:04.065907 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-05 00:56:04.065914 | orchestrator | Thursday 05 February 2026 00:54:48 +0000 (0:00:15.612) 0:01:45.300 ***** 2026-02-05 00:56:04.065920 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.065927 | orchestrator | 2026-02-05 00:56:04.065933 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-05 00:56:04.065939 | orchestrator | 2026-02-05 00:56:04.065945 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-05 00:56:04.066056 | orchestrator | Thursday 05 February 2026 00:54:51 +0000 (0:00:02.507) 0:01:47.808 ***** 2026-02-05 00:56:04.066069 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:04.066076 | orchestrator | 2026-02-05 00:56:04.066083 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-05 00:56:04.066096 | orchestrator | Thursday 05 February 2026 00:55:10 +0000 (0:00:19.328) 0:02:07.136 ***** 2026-02-05 00:56:04.066103 | 
orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.066110 | orchestrator | 2026-02-05 00:56:04.066122 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-05 00:56:04.066172 | orchestrator | Thursday 05 February 2026 00:55:26 +0000 (0:00:15.592) 0:02:22.728 ***** 2026-02-05 00:56:04.066178 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.066184 | orchestrator | 2026-02-05 00:56:04.066191 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-05 00:56:04.066197 | orchestrator | 2026-02-05 00:56:04.066204 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-05 00:56:04.066210 | orchestrator | Thursday 05 February 2026 00:55:28 +0000 (0:00:02.486) 0:02:25.215 ***** 2026-02-05 00:56:04.066216 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.066222 | orchestrator | 2026-02-05 00:56:04.066228 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-05 00:56:04.066235 | orchestrator | Thursday 05 February 2026 00:55:40 +0000 (0:00:12.021) 0:02:37.237 ***** 2026-02-05 00:56:04.066241 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.066247 | orchestrator | 2026-02-05 00:56:04.066254 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-05 00:56:04.066260 | orchestrator | Thursday 05 February 2026 00:55:45 +0000 (0:00:04.622) 0:02:41.859 ***** 2026-02-05 00:56:04.066266 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.066273 | orchestrator | 2026-02-05 00:56:04.066279 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-05 00:56:04.066285 | orchestrator | 2026-02-05 00:56:04.066292 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-05 00:56:04.066298 | orchestrator | 
Thursday 05 February 2026 00:55:48 +0000 (0:00:02.777) 0:02:44.637 ***** 2026-02-05 00:56:04.066305 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:04.066311 | orchestrator | 2026-02-05 00:56:04.066317 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-05 00:56:04.066324 | orchestrator | Thursday 05 February 2026 00:55:48 +0000 (0:00:00.511) 0:02:45.148 ***** 2026-02-05 00:56:04.066330 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.066336 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.066342 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.066348 | orchestrator | 2026-02-05 00:56:04.066355 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-05 00:56:04.066361 | orchestrator | Thursday 05 February 2026 00:55:51 +0000 (0:00:02.668) 0:02:47.816 ***** 2026-02-05 00:56:04.066367 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.066374 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.066380 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.066387 | orchestrator | 2026-02-05 00:56:04.066393 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-05 00:56:04.066399 | orchestrator | Thursday 05 February 2026 00:55:53 +0000 (0:00:02.634) 0:02:50.451 ***** 2026-02-05 00:56:04.066406 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.066412 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.066418 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.066424 | orchestrator | 2026-02-05 00:56:04.066431 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-05 00:56:04.066437 | orchestrator | Thursday 05 February 2026 00:55:56 +0000 (0:00:02.614) 0:02:53.065 ***** 2026-02-05 00:56:04.066443 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.066449 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.066461 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:04.066468 | orchestrator | 2026-02-05 00:56:04.066474 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-05 00:56:04.066480 | orchestrator | Thursday 05 February 2026 00:55:58 +0000 (0:00:02.233) 0:02:55.299 ***** 2026-02-05 00:56:04.066486 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:04.066493 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:04.066499 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:04.066505 | orchestrator | 2026-02-05 00:56:04.066511 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-05 00:56:04.066518 | orchestrator | Thursday 05 February 2026 00:56:01 +0000 (0:00:02.891) 0:02:58.190 ***** 2026-02-05 00:56:04.066524 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:04.066530 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:04.066536 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:04.066542 | orchestrator | 2026-02-05 00:56:04.066549 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:56:04.066555 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-05 00:56:04.066562 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-05 00:56:04.066569 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-05 00:56:04.066576 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-05 00:56:04.066582 | orchestrator | 2026-02-05 00:56:04.066588 | orchestrator | 2026-02-05 00:56:04.066594 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-05 00:56:04.066600 | orchestrator | Thursday 05 February 2026 00:56:01 +0000 (0:00:00.329) 0:02:58.519 ***** 2026-02-05 00:56:04.066607 | orchestrator | =============================================================================== 2026-02-05 00:56:04.066613 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.10s 2026-02-05 00:56:04.066619 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.21s 2026-02-05 00:56:04.066629 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.02s 2026-02-05 00:56:04.066639 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.01s 2026-02-05 00:56:04.066646 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.45s 2026-02-05 00:56:04.066663 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.54s 2026-02-05 00:56:04.066670 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.99s 2026-02-05 00:56:04.066677 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.62s 2026-02-05 00:56:04.066683 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.21s 2026-02-05 00:56:04.066689 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.74s 2026-02-05 00:56:04.066696 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.19s 2026-02-05 00:56:04.066702 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.19s 2026-02-05 00:56:04.066708 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.15s 2026-02-05 00:56:04.066714 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 2.99s 2026-02-05 00:56:04.066720 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.89s 2026-02-05 00:56:04.066726 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.86s 2026-02-05 00:56:04.066733 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.78s 2026-02-05 00:56:04.066744 | orchestrator | Check MariaDB service --------------------------------------------------- 2.75s 2026-02-05 00:56:04.066750 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.67s 2026-02-05 00:56:04.066757 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.63s 2026-02-05 00:56:04.066763 | orchestrator | 2026-02-05 00:56:04 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:56:04.066769 | orchestrator | 2026-02-05 00:56:04 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:56:04.066776 | orchestrator | 2026-02-05 00:56:04 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:56:07.106988 | orchestrator | 2026-02-05 00:56:07 | INFO  | Task 7efe37f4-cae8-400a-bc13-da083fe1b6c4 is in state STARTED 2026-02-05 00:56:07.107780 | orchestrator | 2026-02-05 00:56:07 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:56:07.108848 | orchestrator | 2026-02-05 00:56:07 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state STARTED 2026-02-05 00:56:07.108894 | orchestrator | 2026-02-05 00:56:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:56:10.148141 | orchestrator | 2026-02-05 00:56:10 | INFO  | Task 7efe37f4-cae8-400a-bc13-da083fe1b6c4 is in state STARTED 2026-02-05 00:56:10.150108 | orchestrator | 2026-02-05 00:56:10 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 
00:56:10 | orchestrator | [... identical status polling (tasks 7efe37f4-cae8-400a-bc13-da083fe1b6c4, 40ce4598-11fb-4273-8389-2fb34189dca2, 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 in state STARTED, followed by "Wait 1 second(s) until the next check") repeated every ~3 seconds through 00:57:44 ...]
STARTED 2026-02-05 00:57:44.484936 | orchestrator | 2026-02-05 00:57:44 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:57:44.487192 | orchestrator | 2026-02-05 00:57:44 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:57:44.491355 | orchestrator | 2026-02-05 00:57:44 | INFO  | Task 3ea8555e-2198-42b3-a8d0-2db9ceaa8a60 is in state SUCCESS 2026-02-05 00:57:44.491407 | orchestrator | 2026-02-05 00:57:44.493093 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 00:57:44.493132 | orchestrator | 2.16.14 2026-02-05 00:57:44.493138 | orchestrator | 2026-02-05 00:57:44.493143 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-05 00:57:44.493148 | orchestrator | 2026-02-05 00:57:44.493152 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 00:57:44.493157 | orchestrator | Thursday 05 February 2026 00:55:32 +0000 (0:00:00.527) 0:00:00.527 ***** 2026-02-05 00:57:44.493161 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:57:44.493166 | orchestrator | 2026-02-05 00:57:44.493188 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 00:57:44.493192 | orchestrator | Thursday 05 February 2026 00:55:33 +0000 (0:00:00.624) 0:00:01.152 ***** 2026-02-05 00:57:44.493196 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:57:44.493200 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:57:44.493204 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:57:44.493277 | orchestrator | 2026-02-05 00:57:44.493282 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 00:57:44.493286 | orchestrator | Thursday 05 February 2026 00:55:34 +0000 (0:00:00.654) 0:00:01.807 ***** 
2026-02-05 00:57:44.493395 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493400 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493403 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493407 | orchestrator |
2026-02-05 00:57:44.493411 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 00:57:44.493415 | orchestrator | Thursday 05 February 2026 00:55:34 +0000 (0:00:00.369) 0:00:02.177 *****
2026-02-05 00:57:44.493418 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493422 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493426 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493429 | orchestrator |
2026-02-05 00:57:44.493433 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 00:57:44.493437 | orchestrator | Thursday 05 February 2026 00:55:35 +0000 (0:00:00.904) 0:00:03.081 *****
2026-02-05 00:57:44.493441 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493444 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493448 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493452 | orchestrator |
2026-02-05 00:57:44.493455 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 00:57:44.493459 | orchestrator | Thursday 05 February 2026 00:55:35 +0000 (0:00:00.319) 0:00:03.400 *****
2026-02-05 00:57:44.493463 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493467 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493471 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493474 | orchestrator |
2026-02-05 00:57:44.493478 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 00:57:44.493482 | orchestrator | Thursday 05 February 2026 00:55:35 +0000 (0:00:00.305) 0:00:03.706 *****
2026-02-05 00:57:44.493485 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493489 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493493 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493496 | orchestrator |
2026-02-05 00:57:44.493500 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 00:57:44.493504 | orchestrator | Thursday 05 February 2026 00:55:36 +0000 (0:00:00.306) 0:00:04.012 *****
2026-02-05 00:57:44.493508 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493512 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.493516 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.493520 | orchestrator |
2026-02-05 00:57:44.493524 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 00:57:44.493527 | orchestrator | Thursday 05 February 2026 00:55:36 +0000 (0:00:00.510) 0:00:04.523 *****
2026-02-05 00:57:44.493531 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493535 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493538 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493542 | orchestrator |
2026-02-05 00:57:44.493546 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 00:57:44.493549 | orchestrator | Thursday 05 February 2026 00:55:37 +0000 (0:00:00.295) 0:00:04.818 *****
2026-02-05 00:57:44.493553 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:57:44.493557 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:57:44.493561 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:57:44.493564 | orchestrator |
2026-02-05 00:57:44.493574 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 00:57:44.493578 | orchestrator | Thursday 05 February 2026 00:55:37 +0000 (0:00:00.804) 0:00:05.623 *****
2026-02-05 00:57:44.493582 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493585 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493589 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493593 | orchestrator |
2026-02-05 00:57:44.493597 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 00:57:44.493601 | orchestrator | Thursday 05 February 2026 00:55:38 +0000 (0:00:00.479) 0:00:06.102 *****
2026-02-05 00:57:44.493604 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:57:44.493608 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:57:44.493612 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:57:44.493616 | orchestrator |
2026-02-05 00:57:44.493629 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 00:57:44.493633 | orchestrator | Thursday 05 February 2026 00:55:40 +0000 (0:00:02.386) 0:00:08.489 *****
2026-02-05 00:57:44.493637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 00:57:44.493641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 00:57:44.493645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 00:57:44.493680 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493686 | orchestrator |
2026-02-05 00:57:44.493701 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 00:57:44.493705 | orchestrator | Thursday 05 February 2026 00:55:41 +0000 (0:00:00.699) 0:00:09.188 *****
2026-02-05 00:57:44.493712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493738 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493747 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493751 | orchestrator |
2026-02-05 00:57:44.493756 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 00:57:44.493760 | orchestrator | Thursday 05 February 2026 00:55:42 +0000 (0:00:00.792) 0:00:09.981 *****
2026-02-05 00:57:44.493766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493788 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493793 | orchestrator |
2026-02-05 00:57:44.493798 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 00:57:44.493802 | orchestrator | Thursday 05 February 2026 00:55:42 +0000 (0:00:00.348) 0:00:10.330 *****
2026-02-05 00:57:44.493810 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f039a3b3d787', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 00:55:39.220459', 'end': '2026-02-05 00:55:39.257730', 'delta': '0:00:00.037271', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f039a3b3d787'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493820 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '021257df8da3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 00:55:39.959400', 'end': '2026-02-05 00:55:39.992089', 'delta': '0:00:00.032689', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['021257df8da3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493830 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3992f4b2c9dc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 00:55:40.548960', 'end': '2026-02-05 00:55:40.588741', 'delta': '0:00:00.039781', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3992f4b2c9dc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.493835 | orchestrator |
2026-02-05 00:57:44.493839 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 00:57:44.493844 | orchestrator | Thursday 05 February 2026 00:55:42 +0000 (0:00:00.212) 0:00:10.543 *****
2026-02-05 00:57:44.493848 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.493852 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.493857 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.493861 | orchestrator |
2026-02-05 00:57:44.493865 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 00:57:44.493870 | orchestrator | Thursday 05 February 2026 00:55:43 +0000 (0:00:00.512) 0:00:11.056 *****
2026-02-05 00:57:44.493874 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-05 00:57:44.493879 | orchestrator |
2026-02-05 00:57:44.493883 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 00:57:44.493888 | orchestrator | Thursday 05 February 2026 00:55:45 +0000 (0:00:01.803) 0:00:12.859 *****
2026-02-05 00:57:44.493894 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493901 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.493911 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.493920 | orchestrator |
2026-02-05 00:57:44.493928 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 00:57:44.493936 | orchestrator | Thursday 05 February 2026 00:55:45 +0000 (0:00:00.373) 0:00:13.232 *****
2026-02-05 00:57:44.493942 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493948 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.493954 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.493961 | orchestrator |
2026-02-05 00:57:44.493967 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 00:57:44.493973 | orchestrator | Thursday 05 February 2026 00:55:45 +0000 (0:00:00.422) 0:00:13.655 *****
2026-02-05 00:57:44.493979 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.493985 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.493991 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.493997 | orchestrator |
2026-02-05 00:57:44.494003 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 00:57:44.494009 | orchestrator | Thursday 05 February 2026 00:55:46 +0000 (0:00:00.469) 0:00:14.124 *****
2026-02-05 00:57:44.494048 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.494056 | orchestrator |
2026-02-05 00:57:44.494126 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 00:57:44.494133 | orchestrator | Thursday 05 February 2026 00:55:46 +0000 (0:00:00.117) 0:00:14.242 *****
2026-02-05 00:57:44.494139 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494144 | orchestrator |
2026-02-05 00:57:44.494150 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 00:57:44.494156 | orchestrator | Thursday 05 February 2026 00:55:46 +0000 (0:00:00.242) 0:00:14.484 *****
2026-02-05 00:57:44.494161 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494167 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494173 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494179 | orchestrator |
2026-02-05 00:57:44.494185 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 00:57:44.494228 | orchestrator | Thursday 05 February 2026 00:55:47 +0000 (0:00:00.318) 0:00:14.802 *****
2026-02-05 00:57:44.494233 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494236 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494240 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494244 | orchestrator |
2026-02-05 00:57:44.494248 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 00:57:44.494252 | orchestrator | Thursday 05 February 2026 00:55:47 +0000 (0:00:00.315) 0:00:15.117 *****
2026-02-05 00:57:44.494256 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494260 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494263 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494267 | orchestrator |
2026-02-05 00:57:44.494271 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 00:57:44.494275 | orchestrator | Thursday 05 February 2026 00:55:47 +0000 (0:00:00.513) 0:00:15.631 *****
2026-02-05 00:57:44.494278 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494282 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494286 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494290 | orchestrator |
2026-02-05 00:57:44.494293 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 00:57:44.494297 | orchestrator | Thursday 05 February 2026 00:55:48 +0000 (0:00:00.364) 0:00:15.996 *****
2026-02-05 00:57:44.494306 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494310 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494314 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494318 | orchestrator |
2026-02-05 00:57:44.494323 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 00:57:44.494329 | orchestrator | Thursday 05 February 2026 00:55:48 +0000 (0:00:00.294) 0:00:16.290 *****
2026-02-05 00:57:44.494368 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494376 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494382 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494389 | orchestrator |
2026-02-05 00:57:44.494404 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 00:57:44.494411 | orchestrator | Thursday 05 February 2026 00:55:48 +0000 (0:00:00.360) 0:00:16.651 *****
2026-02-05 00:57:44.494418 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.494425 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.494432 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.494439 | orchestrator |
2026-02-05 00:57:44.494445 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 00:57:44.494453 | orchestrator | Thursday 05 February 2026 00:55:49 +0000 (0:00:00.508) 0:00:17.160 *****
2026-02-05 00:57:44.494461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060', 'dm-uuid-LVM-feYzPNgm7J2XpMW7Ydk9y2b5fFw5ZIRRIiotHRlVte350u57D33HOu7VVPdb83XH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537', 'dm-uuid-LVM-uFzvBTKpmUAt8VIYysGz41q3AIABZs8JooEhyqZtqHh2f1cnjHkA9h5UPVxA9fNA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 00:57:44.494570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe', 'dm-uuid-LVM-QpOrriM4HXirfF1rs1OzVygGinYcii5FYBFOD50VyFUpoK8Z5nC1vL3lA4GQepGI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CXZdc2-YWxI-0CJn-7isE-dwsd-qDH3-XuWeVU', 'scsi-0QEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0', 'scsi-SQEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 00:57:44.494604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5', 'dm-uuid-LVM-gthMeB4bH1NEmx1lNJOfN6HdjDxRPaoOR3G4GdZjZCGPLbrkG1n1uydrTmelXG1F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LrQnMQ-DAfw-CZQO-oCUP-6BZ2-It7W-7UQ90E', 'scsi-0QEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9', 'scsi-SQEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 00:57:44.494619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d', 'scsi-SQEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 00:57:44.494636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:57:44.494672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 00:57:44.494685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2026-02-05 00:57:44.494693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.494701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.494709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.494716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.494724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.494731 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:57:44.494747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16', 
'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.494762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLHj76-cRd3-Fq33-c8yU-BNkP-1i3U-MwSVlt', 'scsi-0QEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d', 'scsi-SQEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.494793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3DBsr-6x7z-Ycn5-m8Xw-A3yt-yMAu-IATD8q', 'scsi-0QEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf', 'scsi-SQEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.494799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30', 'scsi-SQEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.494806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.494817 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:57:44.494824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c', 'dm-uuid-LVM-MUghBa6PcrCydaFvfG0TUOZ9glQ5zyP1N3lbXM9MZ3ncFyWh0RzPsE3Ya86hIsTB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd', 'dm-uuid-LVM-KAxqEhc8qSlu2zzfQu7TSpQJv2qMiOvrq391tuAhjZUKzI0s1g6oimAMe8Junomx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495279 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:57:44.495296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.495302 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5YL8UG-yANo-1ams-B1Eb-5Hxa-zRuW-Qi1SZF', 'scsi-0QEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb', 'scsi-SQEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.495309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Su7pbR-s8e6-pX8N-XCbj-4Jul-MEPJ-wrAfd4', 'scsi-0QEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3', 'scsi-SQEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.495316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722', 'scsi-SQEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.495326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:57:44.495330 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:57:44.495334 | orchestrator | 2026-02-05 00:57:44.495338 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 00:57:44.495341 | orchestrator | Thursday 05 February 2026 00:55:49 +0000 (0:00:00.571) 0:00:17.731 ***** 2026-02-05 00:57:44.495347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060', 'dm-uuid-LVM-feYzPNgm7J2XpMW7Ydk9y2b5fFw5ZIRRIiotHRlVte350u57D33HOu7VVPdb83XH'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537', 'dm-uuid-LVM-uFzvBTKpmUAt8VIYysGz41q3AIABZs8JooEhyqZtqHh2f1cnjHkA9h5UPVxA9fNA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe', 'dm-uuid-LVM-QpOrriM4HXirfF1rs1OzVygGinYcii5FYBFOD50VyFUpoK8Z5nC1vL3lA4GQepGI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:57:44.495411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5', 'dm-uuid-LVM-gthMeB4bH1NEmx1lNJOfN6HdjDxRPaoOR3G4GdZjZCGPLbrkG1n1uydrTmelXG1F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
2026-02-05 00:57:44.495416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16', 'scsi-SQEMU_QEMU_HARDDISK_b27e4124-45c4-4fd4-ab4b-3fe5b3c0167f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3e842383--5890--511f--b982--bff6d8042060-osd--block--3e842383--5890--511f--b982--bff6d8042060'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CXZdc2-YWxI-0CJn-7isE-dwsd-qDH3-XuWeVU', 'scsi-0QEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0', 'scsi-SQEMU_QEMU_HARDDISK_d601120f-cbb3-4953-a30b-917ccea713c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--22ded513--57d8--573e--a796--c8381d672537-osd--block--22ded513--57d8--573e--a796--c8381d672537'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LrQnMQ-DAfw-CZQO-oCUP-6BZ2-It7W-7UQ90E', 'scsi-0QEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9', 'scsi-SQEMU_QEMU_HARDDISK_0f4e2151-cc71-4085-93f0-18395b8a78d9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d', 'scsi-SQEMU_QEMU_HARDDISK_e6da1746-b16d-4279-a6c0-a95c954f705d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495484 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.495488 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495501 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16', 'scsi-SQEMU_QEMU_HARDDISK_249af197-fbc4-4070-877c-ae28488f0fb3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--159372f8--6c52--51f3--a9af--3fbf7ffb45fe-osd--block--159372f8--6c52--51f3--a9af--3fbf7ffb45fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QLHj76-cRd3-Fq33-c8yU-BNkP-1i3U-MwSVlt', 'scsi-0QEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d', 'scsi-SQEMU_QEMU_HARDDISK_c8222ed3-0da2-4bb4-b170-21b6f36ecb8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--523b4628--8322--5ebe--8cc3--60a2eeaa41a5-osd--block--523b4628--8322--5ebe--8cc3--60a2eeaa41a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L3DBsr-6x7z-Ycn5-m8Xw-A3yt-yMAu-IATD8q', 'scsi-0QEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf', 'scsi-SQEMU_QEMU_HARDDISK_b7f472c8-b527-47c9-ac56-62f6f3e84fbf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495521 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30', 'scsi-SQEMU_QEMU_HARDDISK_a3293e5b-f1f9-462e-9781-4b1b679aef30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c', 'dm-uuid-LVM-MUghBa6PcrCydaFvfG0TUOZ9glQ5zyP1N3lbXM9MZ3ncFyWh0RzPsE3Ya86hIsTB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd', 'dm-uuid-LVM-KAxqEhc8qSlu2zzfQu7TSpQJv2qMiOvrq391tuAhjZUKzI0s1g6oimAMe8Junomx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495543 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.495547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495569 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495573 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495580 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae798e57-6294-4077-9df2-d289d5b267fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3edfc207--63bb--5e8f--b635--306c655bc02c-osd--block--3edfc207--63bb--5e8f--b635--306c655bc02c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5YL8UG-yANo-1ams-B1Eb-5Hxa-zRuW-Qi1SZF', 'scsi-0QEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb', 'scsi-SQEMU_QEMU_HARDDISK_9acd2af8-1818-4377-bd1d-628102e352cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495604 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--121c279b--9e45--54e8--9359--e1d452607edd-osd--block--121c279b--9e45--54e8--9359--e1d452607edd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Su7pbR-s8e6-pX8N-XCbj-4Jul-MEPJ-wrAfd4', 'scsi-0QEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3', 'scsi-SQEMU_QEMU_HARDDISK_33f37d33-b22b-44c3-8624-6074b4bf08c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722', 'scsi-SQEMU_QEMU_HARDDISK_7f67b6e9-f99c-4354-902d-31e3a3988722'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 00:57:44.495620 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.495624 | orchestrator |
2026-02-05 00:57:44.495628 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-05 00:57:44.495632 | orchestrator | Thursday 05 February 2026 00:55:50 +0000 (0:00:00.627) 0:00:18.359 *****
2026-02-05 00:57:44.495635 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.495640 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.495646 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.495668 | orchestrator |
2026-02-05 00:57:44.495675 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 00:57:44.495681 | orchestrator | Thursday 05 February 2026 00:55:51 +0000 (0:00:00.723) 0:00:19.082 *****
2026-02-05 00:57:44.495686 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.495692 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.495698 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.495703 | orchestrator |
2026-02-05 00:57:44.495709 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 00:57:44.495713 | orchestrator | Thursday 05 February 2026 00:55:51 +0000 (0:00:00.473) 0:00:19.556 *****
2026-02-05 00:57:44.495717 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:57:44.495721 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:57:44.495727 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:57:44.495733 | orchestrator |
2026-02-05 00:57:44.495738 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 00:57:44.495744 | orchestrator | Thursday 05 February 2026 00:55:52 +0000 (0:00:00.677) 0:00:20.233 *****
2026-02-05 00:57:44.495750 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.495755 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.495761 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.495767 | orchestrator |
2026-02-05 00:57:44.495772 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 00:57:44.495777 | orchestrator | Thursday 05 February 2026 00:55:52 +0000 (0:00:00.260) 0:00:20.494 *****
2026-02-05 00:57:44.495783 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.495787 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.495791 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.495795 | orchestrator |
2026-02-05 00:57:44.495798 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 00:57:44.495802 | orchestrator | Thursday 05 February 2026 00:55:53 +0000 (0:00:00.360) 0:00:20.854 *****
2026-02-05 00:57:44.495806 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.495809 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.495813 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.495817 | orchestrator |
2026-02-05 00:57:44.495821 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 00:57:44.495824 | orchestrator | Thursday 05 February 2026 00:55:53 +0000 (0:00:00.403) 0:00:21.258 *****
2026-02-05 00:57:44.495828 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 00:57:44.495833 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 00:57:44.495836 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 00:57:44.495840 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 00:57:44.495844 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 00:57:44.495848 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 00:57:44.495852 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 00:57:44.495857 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 00:57:44.495861 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 00:57:44.495865 | orchestrator |
2026-02-05 00:57:44.495869 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 00:57:44.495874 | orchestrator | Thursday 05 February 2026 00:55:54 +0000 (0:00:00.856) 0:00:22.114 *****
2026-02-05 00:57:44.495878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 00:57:44.495882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 00:57:44.495887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 00:57:44.495891 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.495895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 00:57:44.495899 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 00:57:44.495910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 00:57:44.495914 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.495918 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 00:57:44.495923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 00:57:44.495927 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 00:57:44.495931 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.495935 | orchestrator |
2026-02-05 00:57:44.495939 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 00:57:44.495944 | orchestrator | Thursday 05 February 2026 00:55:54 +0000 (0:00:00.319) 0:00:22.434 *****
2026-02-05 00:57:44.495952 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:57:44.495957 | orchestrator |
2026-02-05 00:57:44.495961 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 00:57:44.495967 | orchestrator | Thursday 05 February 2026 00:55:55 +0000 (0:00:00.578) 0:00:23.013 *****
2026-02-05 00:57:44.495974 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.495979 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.495983 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.495987 | orchestrator |
2026-02-05 00:57:44.495991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 00:57:44.495996 | orchestrator | Thursday 05 February 2026 00:55:55 +0000 (0:00:00.288) 0:00:23.301 *****
2026-02-05 00:57:44.496000 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.496004 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.496008 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.496013 | orchestrator |
2026-02-05 00:57:44.496017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 00:57:44.496021 | orchestrator | Thursday 05 February 2026 00:55:55 +0000 (0:00:00.292) 0:00:23.594 *****
2026-02-05 00:57:44.496026 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:57:44.496030 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:57:44.496034 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:57:44.496039 | orchestrator |
2026-02-05 00:57:44.496043 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 00:57:44.496048 | orchestrator | Thursday 05 February 2026 00:55:56 +0000 (0:00:00.272) 0:00:23.867 *****
2026-02-05
00:57:44.496052 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:57:44.496056 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:57:44.496060 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:57:44.496065 | orchestrator | 2026-02-05 00:57:44.496069 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 00:57:44.496073 | orchestrator | Thursday 05 February 2026 00:55:56 +0000 (0:00:00.690) 0:00:24.558 ***** 2026-02-05 00:57:44.496078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:57:44.496082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:57:44.496087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:57:44.496091 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:57:44.496095 | orchestrator | 2026-02-05 00:57:44.496100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 00:57:44.496104 | orchestrator | Thursday 05 February 2026 00:55:57 +0000 (0:00:00.388) 0:00:24.946 ***** 2026-02-05 00:57:44.496108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:57:44.496112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:57:44.496116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:57:44.496121 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:57:44.496125 | orchestrator | 2026-02-05 00:57:44.496129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 00:57:44.496137 | orchestrator | Thursday 05 February 2026 00:55:57 +0000 (0:00:00.387) 0:00:25.334 ***** 2026-02-05 00:57:44.496142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:57:44.496146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:57:44.496150 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:57:44.496154 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:57:44.496159 | orchestrator | 2026-02-05 00:57:44.496163 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 00:57:44.496168 | orchestrator | Thursday 05 February 2026 00:55:57 +0000 (0:00:00.376) 0:00:25.711 ***** 2026-02-05 00:57:44.496172 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:57:44.496176 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:57:44.496181 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:57:44.496185 | orchestrator | 2026-02-05 00:57:44.496190 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 00:57:44.496194 | orchestrator | Thursday 05 February 2026 00:55:58 +0000 (0:00:00.336) 0:00:26.047 ***** 2026-02-05 00:57:44.496198 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 00:57:44.496202 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 00:57:44.496207 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 00:57:44.496212 | orchestrator | 2026-02-05 00:57:44.496216 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 00:57:44.496220 | orchestrator | Thursday 05 February 2026 00:55:58 +0000 (0:00:00.470) 0:00:26.518 ***** 2026-02-05 00:57:44.496225 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:57:44.496229 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:57:44.496234 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:57:44.496238 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 00:57:44.496242 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-05 00:57:44.496246 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 00:57:44.496250 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 00:57:44.496253 | orchestrator | 2026-02-05 00:57:44.496257 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 00:57:44.496261 | orchestrator | Thursday 05 February 2026 00:55:59 +0000 (0:00:00.868) 0:00:27.387 ***** 2026-02-05 00:57:44.496264 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:57:44.496271 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:57:44.496275 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:57:44.496278 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 00:57:44.496282 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 00:57:44.496286 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 00:57:44.496291 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 00:57:44.496295 | orchestrator | 2026-02-05 00:57:44.496299 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-05 00:57:44.496304 | orchestrator | Thursday 05 February 2026 00:56:01 +0000 (0:00:01.777) 0:00:29.164 ***** 2026-02-05 00:57:44.496310 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:57:44.496316 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:57:44.496321 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-05 00:57:44.496327 | orchestrator | 2026-02-05 00:57:44.496332 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-05 00:57:44.496341 | orchestrator | Thursday 05 February 2026 00:56:01 +0000 (0:00:00.340) 0:00:29.504 ***** 2026-02-05 00:57:44.496348 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:57:44.496355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:57:44.496362 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:57:44.496369 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:57:44.496375 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:57:44.496381 | orchestrator | 2026-02-05 00:57:44.496387 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-05 00:57:44.496393 | orchestrator | Thursday 05 February 2026 00:56:46 +0000 (0:00:45.179) 0:01:14.684 ***** 2026-02-05 00:57:44.496399 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496405 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496421 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496424 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496428 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-05 00:57:44.496432 | orchestrator | 2026-02-05 00:57:44.496435 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-05 00:57:44.496439 | orchestrator | Thursday 05 February 2026 00:57:10 +0000 (0:00:23.883) 0:01:38.568 ***** 2026-02-05 00:57:44.496443 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496446 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496450 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496454 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496458 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496461 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496465 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:57:44.496469 | orchestrator | 2026-02-05 00:57:44.496473 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-05 00:57:44.496476 | orchestrator | Thursday 05 February 2026 00:57:23 +0000 (0:00:12.574) 0:01:51.142 ***** 2026-02-05 00:57:44.496482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496490 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:57:44.496494 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:57:44.496497 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496501 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:57:44.496508 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:57:44.496512 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496515 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:57:44.496519 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:57:44.496523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496526 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:57:44.496530 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:57:44.496534 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496538 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-05 00:57:44.496541 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:57:44.496545 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:57:44.496549 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:57:44.496552 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:57:44.496556 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-05 00:57:44.496560 | orchestrator | 2026-02-05 00:57:44.496564 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:57:44.496568 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-05 00:57:44.496573 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-05 00:57:44.496577 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-05 00:57:44.496581 | orchestrator | 2026-02-05 00:57:44.496585 | orchestrator | 2026-02-05 00:57:44.496588 | orchestrator | 2026-02-05 00:57:44.496592 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:57:44.496597 | orchestrator | Thursday 05 February 2026 00:57:41 +0000 (0:00:17.930) 0:02:09.072 ***** 2026-02-05 00:57:44.496600 | orchestrator | =============================================================================== 2026-02-05 00:57:44.496604 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.18s 2026-02-05 00:57:44.496608 | orchestrator | generate keys ---------------------------------------------------------- 23.88s 2026-02-05 00:57:44.496612 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.93s 
2026-02-05 00:57:44.496615 | orchestrator | get keys from monitors ------------------------------------------------- 12.57s 2026-02-05 00:57:44.496619 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.39s 2026-02-05 00:57:44.496623 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.80s 2026-02-05 00:57:44.496627 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.78s 2026-02-05 00:57:44.496630 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.90s 2026-02-05 00:57:44.496634 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s 2026-02-05 00:57:44.496641 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2026-02-05 00:57:44.496644 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.80s 2026-02-05 00:57:44.496676 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s 2026-02-05 00:57:44.496682 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2026-02-05 00:57:44.496686 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.70s 2026-02-05 00:57:44.496691 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.69s 2026-02-05 00:57:44.496697 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-02-05 00:57:44.496704 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s 2026-02-05 00:57:44.496710 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2026-02-05 00:57:44.496716 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2026-02-05 
00:57:44.496722 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.58s 2026-02-05 00:57:44.496728 | orchestrator | 2026-02-05 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:47.539577 | orchestrator | 2026-02-05 00:57:47 | INFO  | Task 7efe37f4-cae8-400a-bc13-da083fe1b6c4 is in state STARTED 2026-02-05 00:57:47.541684 | orchestrator | 2026-02-05 00:57:47 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:57:47.543078 | orchestrator | 2026-02-05 00:57:47 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:57:47.543113 | orchestrator | 2026-02-05 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:50.587516 | orchestrator | 2026-02-05 00:57:50 | INFO  | Task 7efe37f4-cae8-400a-bc13-da083fe1b6c4 is in state SUCCESS 2026-02-05 00:57:50.588343 | orchestrator | 2026-02-05 00:57:50.588384 | orchestrator | 2026-02-05 00:57:50.588392 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:57:50.588400 | orchestrator | 2026-02-05 00:57:50.588457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:57:50.588559 | orchestrator | Thursday 05 February 2026 00:56:06 +0000 (0:00:00.246) 0:00:00.246 ***** 2026-02-05 00:57:50.588569 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:57:50.588580 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:57:50.588586 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:57:50.588592 | orchestrator | 2026-02-05 00:57:50.588597 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:57:50.588604 | orchestrator | Thursday 05 February 2026 00:56:06 +0000 (0:00:00.263) 0:00:00.510 ***** 2026-02-05 00:57:50.588610 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-05 00:57:50.588618 | orchestrator | 
ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-05 00:57:50.588624 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-05 00:57:50.588629 | orchestrator | 2026-02-05 00:57:50.588635 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-05 00:57:50.588641 | orchestrator | 2026-02-05 00:57:50.588647 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:57:50.588714 | orchestrator | Thursday 05 February 2026 00:56:06 +0000 (0:00:00.364) 0:00:00.874 ***** 2026-02-05 00:57:50.588721 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:57:50.588729 | orchestrator | 2026-02-05 00:57:50.588736 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-05 00:57:50.588743 | orchestrator | Thursday 05 February 2026 00:56:07 +0000 (0:00:00.456) 0:00:01.331 ***** 2026-02-05 00:57:50.588756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:57:50.588818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:57:50.588836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:57:50.588843 | orchestrator | 2026-02-05 00:57:50.588849 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-05 00:57:50.588855 | orchestrator | Thursday 05 February 2026 00:56:08 +0000 (0:00:00.984) 0:00:02.316 ***** 2026-02-05 00:57:50.588861 | 
orchestrator | ok: [testbed-node-0] 2026-02-05 00:57:50.588867 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:57:50.588873 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:57:50.588878 | orchestrator | 2026-02-05 00:57:50.588884 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:57:50.588890 | orchestrator | Thursday 05 February 2026 00:56:08 +0000 (0:00:00.358) 0:00:02.674 ***** 2026-02-05 00:57:50.588905 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 00:57:50.588910 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 00:57:50.588914 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 00:57:50.588917 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 00:57:50.588921 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 00:57:50.588925 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 00:57:50.588929 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-05 00:57:50.588933 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 00:57:50.588936 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 00:57:50.588944 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 00:57:50.588948 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 00:57:50.588952 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 00:57:50.588955 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': 
False})  2026-02-05 00:57:50.588959 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 00:57:50.588964 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-05 00:57:50.588970 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 00:57:50.588976 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 00:57:50.588983 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 00:57:50.588989 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 00:57:50.589069 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 00:57:50.589076 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 00:57:50.589080 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 00:57:50.589084 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-05 00:57:50.589088 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 00:57:50.589092 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-05 00:57:50.589098 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-05 00:57:50.589102 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-05 00:57:50.589106 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-05 00:57:50.589110 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-05 00:57:50.589113 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-05 00:57:50.589117 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-05 00:57:50.589121 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-05 00:57:50.589125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-05 00:57:50.589133 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-05 00:57:50.589137 | orchestrator |
2026-02-05 00:57:50.589141 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589145 | orchestrator | Thursday 05 February 2026 00:56:09 +0000 (0:00:00.692) 0:00:03.367 *****
2026-02-05 00:57:50.589149 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589152 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589161 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589165 | orchestrator |
2026-02-05 00:57:50.589169 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589172 | orchestrator | Thursday 05 February 2026 00:56:09 +0000 (0:00:00.268) 0:00:03.635 *****
2026-02-05 00:57:50.589181 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589186 | orchestrator |
2026-02-05 00:57:50.589190 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589194 | orchestrator | Thursday 05 February 2026 00:56:09 +0000 (0:00:00.112) 0:00:03.748 *****
2026-02-05 00:57:50.589198 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589202 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589205 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589209 | orchestrator |
2026-02-05 00:57:50.589213 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589217 | orchestrator | Thursday 05 February 2026 00:56:09 +0000 (0:00:00.351) 0:00:04.099 *****
2026-02-05 00:57:50.589221 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589224 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589230 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589236 | orchestrator |
2026-02-05 00:57:50.589242 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589248 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.271) 0:00:04.370 *****
2026-02-05 00:57:50.589257 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589264 | orchestrator |
2026-02-05 00:57:50.589272 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589278 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.096) 0:00:04.467 *****
2026-02-05 00:57:50.589282 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589286 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589289 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589293 | orchestrator |
2026-02-05 00:57:50.589297 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589300 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.248) 0:00:04.716 *****
2026-02-05 00:57:50.589304 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589308 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589314 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589320 | orchestrator |
2026-02-05 00:57:50.589325 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589332 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.266) 0:00:04.982 *****
2026-02-05 00:57:50.589338 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589346 | orchestrator |
2026-02-05 00:57:50.589353 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589360 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.127) 0:00:05.109 *****
2026-02-05 00:57:50.589364 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589368 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589372 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589376 | orchestrator |
2026-02-05 00:57:50.589379 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589383 | orchestrator | Thursday 05 February 2026 00:56:11 +0000 (0:00:00.495) 0:00:05.604 *****
2026-02-05 00:57:50.589387 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589392 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589397 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589403 | orchestrator |
2026-02-05 00:57:50.589410 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589415 | orchestrator | Thursday 05 February 2026 00:56:11 +0000 (0:00:00.303) 0:00:05.908 *****
2026-02-05 00:57:50.589421 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589427 | orchestrator |
2026-02-05 00:57:50.589432 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589443 | orchestrator | Thursday 05 February 2026 00:56:11 +0000 (0:00:00.135) 0:00:06.044 *****
2026-02-05 00:57:50.589449 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589455 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589461 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589467 | orchestrator |
2026-02-05 00:57:50.589473 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589481 | orchestrator | Thursday 05 February 2026 00:56:12 +0000 (0:00:00.295) 0:00:06.339 *****
2026-02-05 00:57:50.589488 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589492 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589496 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589500 | orchestrator |
2026-02-05 00:57:50.589504 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589508 | orchestrator | Thursday 05 February 2026 00:56:12 +0000 (0:00:00.280) 0:00:06.619 *****
2026-02-05 00:57:50.589511 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589515 | orchestrator |
2026-02-05 00:57:50.589519 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589523 | orchestrator | Thursday 05 February 2026 00:56:12 +0000 (0:00:00.217) 0:00:06.837 *****
2026-02-05 00:57:50.589526 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589530 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589534 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589538 | orchestrator |
2026-02-05 00:57:50.589541 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589545 | orchestrator | Thursday 05 February 2026 00:56:12 +0000 (0:00:00.262) 0:00:07.099 *****
2026-02-05 00:57:50.589549 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589552 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589556 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589560 | orchestrator |
2026-02-05 00:57:50.589564 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589570 | orchestrator | Thursday 05 February 2026 00:56:13 +0000 (0:00:00.263) 0:00:07.362 *****
2026-02-05 00:57:50.589574 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589578 | orchestrator |
2026-02-05 00:57:50.589581 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589585 | orchestrator | Thursday 05 February 2026 00:56:13 +0000 (0:00:00.131) 0:00:07.494 *****
2026-02-05 00:57:50.589589 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589593 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589596 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589600 | orchestrator |
2026-02-05 00:57:50.589604 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589612 | orchestrator | Thursday 05 February 2026 00:56:13 +0000 (0:00:00.245) 0:00:07.739 *****
2026-02-05 00:57:50.589616 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589620 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589623 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589627 | orchestrator |
2026-02-05 00:57:50.589631 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589634 | orchestrator | Thursday 05 February 2026 00:56:13 +0000 (0:00:00.411) 0:00:08.151 *****
2026-02-05 00:57:50.589638 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589643 | orchestrator |
2026-02-05 00:57:50.589678 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589685 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.107) 0:00:08.258 *****
2026-02-05 00:57:50.589691 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589696 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589702 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589708 | orchestrator |
2026-02-05 00:57:50.589715 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589733 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.266) 0:00:08.524 *****
2026-02-05 00:57:50.589739 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589747 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589754 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589762 | orchestrator |
2026-02-05 00:57:50.589769 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589774 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.260) 0:00:08.785 *****
2026-02-05 00:57:50.589779 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589783 | orchestrator |
2026-02-05 00:57:50.589787 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589792 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.117) 0:00:08.903 *****
2026-02-05 00:57:50.589796 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589801 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589805 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589810 | orchestrator |
2026-02-05 00:57:50.589814 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589819 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.245) 0:00:09.148 *****
2026-02-05 00:57:50.589824 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589828 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589832 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589835 | orchestrator |
2026-02-05 00:57:50.589839 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589843 | orchestrator | Thursday 05 February 2026 00:56:15 +0000 (0:00:00.551) 0:00:09.700 *****
2026-02-05 00:57:50.589847 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589850 | orchestrator |
2026-02-05 00:57:50.589854 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589858 | orchestrator | Thursday 05 February 2026 00:56:15 +0000 (0:00:00.101) 0:00:09.801 *****
2026-02-05 00:57:50.589861 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589865 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589869 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589874 | orchestrator |
2026-02-05 00:57:50.589880 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-05 00:57:50.589886 | orchestrator | Thursday 05 February 2026 00:56:15 +0000 (0:00:00.267) 0:00:10.069 *****
2026-02-05 00:57:50.589892 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:50.589897 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:50.589903 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:50.589908 | orchestrator |
2026-02-05 00:57:50.589915 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-05 00:57:50.589921 | orchestrator | Thursday 05 February 2026 00:56:16 +0000 (0:00:00.275) 0:00:10.344 *****
2026-02-05 00:57:50.589927 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589932 | orchestrator |
2026-02-05 00:57:50.589940 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-05 00:57:50.589946 | orchestrator | Thursday 05 February 2026 00:56:16 +0000 (0:00:00.116) 0:00:10.461 *****
2026-02-05 00:57:50.589953 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.589960 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.589967 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.589971 | orchestrator |
2026-02-05 00:57:50.589974 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-05 00:57:50.589978 | orchestrator | Thursday 05 February 2026 00:56:16 +0000 (0:00:00.380) 0:00:10.841 *****
2026-02-05 00:57:50.589982 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:50.589986 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:50.589990 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:50.589993 | orchestrator |
2026-02-05 00:57:50.589997 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-05 00:57:50.590001 | orchestrator | Thursday 05 February 2026 00:56:18 +0000 (0:00:01.753) 0:00:12.595 *****
2026-02-05 00:57:50.590008 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-05 00:57:50.590012 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-05 00:57:50.590054 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-05 00:57:50.590058 | orchestrator |
2026-02-05 00:57:50.590064 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-05 00:57:50.590068 | orchestrator | Thursday 05 February 2026 00:56:20 +0000 (0:00:01.869) 0:00:14.465 *****
2026-02-05 00:57:50.590072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-05 00:57:50.590076 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-05 00:57:50.590080 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-05 00:57:50.590084 | orchestrator |
2026-02-05 00:57:50.590093 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-05 00:57:50.590097 | orchestrator | Thursday 05 February 2026 00:56:22 +0000 (0:00:02.095) 0:00:16.560 *****
2026-02-05 00:57:50.590100 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-05 00:57:50.590104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-05 00:57:50.590108 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-05 00:57:50.590111 | orchestrator |
2026-02-05 00:57:50.590115 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-05 00:57:50.590119 | orchestrator | Thursday 05 February 2026 00:56:23 +0000 (0:00:01.597) 0:00:18.157 *****
2026-02-05 00:57:50.590123 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.590126 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.590130 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.590134 | orchestrator |
2026-02-05 00:57:50.590137 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-05 00:57:50.590141 | orchestrator | Thursday 05 February 2026 00:56:24 +0000 (0:00:00.388) 0:00:18.546 *****
2026-02-05 00:57:50.590145 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.590148 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.590152 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.590156 | orchestrator |
2026-02-05 00:57:50.590159 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-05 00:57:50.590163 | orchestrator | Thursday 05 February 2026 00:56:24 +0000 (0:00:00.254) 0:00:18.800 *****
2026-02-05 00:57:50.590167 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:57:50.590171 | orchestrator |
2026-02-05 00:57:50.590174 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-05 00:57:50.590178 | orchestrator | Thursday 05 February 2026 00:56:25 +0000 (0:00:00.498) 0:00:19.298 *****
2026-02-05 00:57:50.590187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590217 | orchestrator |
2026-02-05 00:57:50.590221 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-02-05 00:57:50.590225 | orchestrator | Thursday 05 February 2026 00:56:26 +0000 (0:00:01.564) 0:00:20.862 *****
2026-02-05 00:57:50.590232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590240 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.590251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590256 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:50.590260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590267 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.590271 | orchestrator |
2026-02-05 00:57:50.590274 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-02-05 00:57:50.590278 | orchestrator | Thursday 05 February 2026 00:56:27 +0000 (0:00:00.587) 0:00:21.450 *****
2026-02-05 00:57:50.590288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590292 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:50.590296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590304 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:50.590314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 00:57:50.590318 |
orchestrator | skipping: [testbed-node-2] 2026-02-05 00:57:50.590322 | orchestrator | 2026-02-05 00:57:50.590326 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-05 00:57:50.590329 | orchestrator | Thursday 05 February 2026 00:56:28 +0000 (0:00:00.841) 0:00:22.291 ***** 2026-02-05 00:57:50.590333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:57:50.590346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:57:50.590358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:57:50.590362 | orchestrator | 2026-02-05 00:57:50.590366 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:57:50.590370 | orchestrator | Thursday 05 February 2026 00:56:29 +0000 (0:00:01.383) 0:00:23.675 ***** 2026-02-05 00:57:50.590374 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:57:50.590377 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:57:50.590381 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:57:50.590385 | orchestrator | 2026-02-05 00:57:50.590389 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:57:50.590395 | orchestrator | Thursday 05 February 2026 00:56:29 +0000 (0:00:00.297) 0:00:23.972 ***** 2026-02-05 00:57:50.590399 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:57:50.590403 | orchestrator | 2026-02-05 00:57:50.590407 | orchestrator | TASK 
[horizon : Creating Horizon database] ************************************* 2026-02-05 00:57:50.590410 | orchestrator | Thursday 05 February 2026 00:56:30 +0000 (0:00:00.561) 0:00:24.533 ***** 2026-02-05 00:57:50.590414 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:50.590418 | orchestrator | 2026-02-05 00:57:50.590422 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-05 00:57:50.590425 | orchestrator | Thursday 05 February 2026 00:56:32 +0000 (0:00:02.492) 0:00:27.026 ***** 2026-02-05 00:57:50.590429 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:50.590433 | orchestrator | 2026-02-05 00:57:50.590436 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-05 00:57:50.590440 | orchestrator | Thursday 05 February 2026 00:56:35 +0000 (0:00:02.239) 0:00:29.266 ***** 2026-02-05 00:57:50.590444 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:50.590448 | orchestrator | 2026-02-05 00:57:50.590451 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-05 00:57:50.590458 | orchestrator | Thursday 05 February 2026 00:56:50 +0000 (0:00:15.802) 0:00:45.068 ***** 2026-02-05 00:57:50.590462 | orchestrator | 2026-02-05 00:57:50.590466 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-05 00:57:50.590469 | orchestrator | Thursday 05 February 2026 00:56:50 +0000 (0:00:00.063) 0:00:45.132 ***** 2026-02-05 00:57:50.590473 | orchestrator | 2026-02-05 00:57:50.590477 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-05 00:57:50.590481 | orchestrator | Thursday 05 February 2026 00:56:50 +0000 (0:00:00.062) 0:00:45.194 ***** 2026-02-05 00:57:50.590484 | orchestrator | 2026-02-05 00:57:50.590488 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] 
************************** 2026-02-05 00:57:50.590492 | orchestrator | Thursday 05 February 2026 00:56:51 +0000 (0:00:00.068) 0:00:45.262 ***** 2026-02-05 00:57:50.590495 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:50.590499 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:57:50.590503 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:57:50.590506 | orchestrator | 2026-02-05 00:57:50.590510 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:57:50.590514 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 00:57:50.590518 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-05 00:57:50.590522 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-05 00:57:50.590526 | orchestrator | 2026-02-05 00:57:50.590529 | orchestrator | 2026-02-05 00:57:50.590533 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:57:50.590537 | orchestrator | Thursday 05 February 2026 00:57:49 +0000 (0:00:58.906) 0:01:44.169 ***** 2026-02-05 00:57:50.590541 | orchestrator | =============================================================================== 2026-02-05 00:57:50.590544 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.91s 2026-02-05 00:57:50.590548 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.80s 2026-02-05 00:57:50.590552 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.49s 2026-02-05 00:57:50.590555 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.24s 2026-02-05 00:57:50.590559 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.10s 
2026-02-05 00:57:50.590563 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.87s 2026-02-05 00:57:50.590566 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.75s 2026-02-05 00:57:50.590570 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.60s 2026-02-05 00:57:50.590574 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.56s 2026-02-05 00:57:50.590577 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.38s 2026-02-05 00:57:50.590581 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.98s 2026-02-05 00:57:50.590585 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.84s 2026-02-05 00:57:50.590589 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2026-02-05 00:57:50.590592 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.59s 2026-02-05 00:57:50.590596 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-02-05 00:57:50.590600 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-02-05 00:57:50.590606 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2026-02-05 00:57:50.590610 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2026-02-05 00:57:50.590620 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.46s 2026-02-05 00:57:50.590624 | orchestrator | horizon : Update policy file name --------------------------------------- 0.41s 2026-02-05 00:57:50.590627 | orchestrator | 2026-02-05 00:57:50 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state 
STARTED 2026-02-05 00:57:50.590977 | orchestrator | 2026-02-05 00:57:50 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:57:50.591041 | orchestrator | 2026-02-05 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:53.636273 | orchestrator | 2026-02-05 00:57:53 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:57:53.638397 | orchestrator | 2026-02-05 00:57:53 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:57:53.638455 | orchestrator | 2026-02-05 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:56.678554 | orchestrator | 2026-02-05 00:57:56 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:57:56.681198 | orchestrator | 2026-02-05 00:57:56 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:57:56.681313 | orchestrator | 2026-02-05 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:59.728603 | orchestrator | 2026-02-05 00:57:59 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:57:59.731116 | orchestrator | 2026-02-05 00:57:59 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:57:59.731361 | orchestrator | 2026-02-05 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:02.775055 | orchestrator | 2026-02-05 00:58:02 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:58:02.776330 | orchestrator | 2026-02-05 00:58:02 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:02.776378 | orchestrator | 2026-02-05 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:05.820240 | orchestrator | 2026-02-05 00:58:05 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:58:05.820313 | orchestrator | 2026-02-05 00:58:05 | INFO  
| Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:05.820320 | orchestrator | 2026-02-05 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:08.869147 | orchestrator | 2026-02-05 00:58:08 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:58:08.870913 | orchestrator | 2026-02-05 00:58:08 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:08.870986 | orchestrator | 2026-02-05 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:11.911904 | orchestrator | 2026-02-05 00:58:11 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:58:11.913370 | orchestrator | 2026-02-05 00:58:11 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:11.913429 | orchestrator | 2026-02-05 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:14.958088 | orchestrator | 2026-02-05 00:58:14 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:58:14.960218 | orchestrator | 2026-02-05 00:58:14 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:14.960296 | orchestrator | 2026-02-05 00:58:14 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:17.994795 | orchestrator | 2026-02-05 00:58:17 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state STARTED 2026-02-05 00:58:18.000577 | orchestrator | 2026-02-05 00:58:17 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:18.001005 | orchestrator | 2026-02-05 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:21.054591 | orchestrator | 2026-02-05 00:58:21 | INFO  | Task 4f5cc073-0eb9-4c13-ad8b-21bdf2fd307a is in state SUCCESS 2026-02-05 00:58:21.056296 | orchestrator | 2026-02-05 00:58:21 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 
00:58:21.058794 | orchestrator | 2026-02-05 00:58:21 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:21.059105 | orchestrator | 2026-02-05 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:24.112284 | orchestrator | 2026-02-05 00:58:24 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:24.115570 | orchestrator | 2026-02-05 00:58:24 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:24.115700 | orchestrator | 2026-02-05 00:58:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:27.170749 | orchestrator | 2026-02-05 00:58:27 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:27.172632 | orchestrator | 2026-02-05 00:58:27 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:27.172713 | orchestrator | 2026-02-05 00:58:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:30.224773 | orchestrator | 2026-02-05 00:58:30 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:30.226702 | orchestrator | 2026-02-05 00:58:30 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:30.226824 | orchestrator | 2026-02-05 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:33.267857 | orchestrator | 2026-02-05 00:58:33 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:33.270719 | orchestrator | 2026-02-05 00:58:33 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:33.270779 | orchestrator | 2026-02-05 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:36.327277 | orchestrator | 2026-02-05 00:58:36 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:36.330389 | orchestrator | 2026-02-05 00:58:36 | INFO  | Task 
2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:36.330455 | orchestrator | 2026-02-05 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:39.373357 | orchestrator | 2026-02-05 00:58:39 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:39.375461 | orchestrator | 2026-02-05 00:58:39 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:39.376252 | orchestrator | 2026-02-05 00:58:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:42.418159 | orchestrator | 2026-02-05 00:58:42 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:42.419626 | orchestrator | 2026-02-05 00:58:42 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:42.420166 | orchestrator | 2026-02-05 00:58:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:45.466884 | orchestrator | 2026-02-05 00:58:45 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:45.467672 | orchestrator | 2026-02-05 00:58:45 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:45.467892 | orchestrator | 2026-02-05 00:58:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:48.504120 | orchestrator | 2026-02-05 00:58:48 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:48.505376 | orchestrator | 2026-02-05 00:58:48 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:48.505442 | orchestrator | 2026-02-05 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:51.545068 | orchestrator | 2026-02-05 00:58:51 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state STARTED 2026-02-05 00:58:51.546600 | orchestrator | 2026-02-05 00:58:51 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 
00:58:51.546663 | orchestrator | 2026-02-05 00:58:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:54.590859 | orchestrator | 2026-02-05 00:58:54.590934 | orchestrator | 2026-02-05 00:58:54.590941 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-05 00:58:54.590985 | orchestrator | 2026-02-05 00:58:54.590992 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-05 00:58:54.590997 | orchestrator | Thursday 05 February 2026 00:57:45 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-02-05 00:58:54.591001 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-05 00:58:54.591007 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591011 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591015 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 00:58:54.591019 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591024 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-05 00:58:54.591028 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-05 00:58:54.591032 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-05 00:58:54.591036 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-05 00:58:54.591042 | orchestrator | 2026-02-05 00:58:54.591048 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 
2026-02-05 00:58:54.591054 | orchestrator | Thursday 05 February 2026 00:57:50 +0000 (0:00:04.789) 0:00:04.945 ***** 2026-02-05 00:58:54.591062 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-05 00:58:54.591068 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591181 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 00:58:54.591189 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591235 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-05 00:58:54.591242 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-05 00:58:54.591249 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-05 00:58:54.591256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-05 00:58:54.591282 | orchestrator | 2026-02-05 00:58:54.591288 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-05 00:58:54.591295 | orchestrator | Thursday 05 February 2026 00:57:55 +0000 (0:00:04.430) 0:00:09.376 ***** 2026-02-05 00:58:54.591302 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 00:58:54.591308 | orchestrator | 2026-02-05 00:58:54.591311 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-05 00:58:54.591315 | orchestrator | Thursday 05 February 2026 00:57:56 +0000 (0:00:00.979) 0:00:10.355 
***** 2026-02-05 00:58:54.591319 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-05 00:58:54.591324 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591328 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591332 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 00:58:54.591336 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591339 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-05 00:58:54.591343 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-05 00:58:54.591347 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-05 00:58:54.591351 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-05 00:58:54.591355 | orchestrator | 2026-02-05 00:58:54.591359 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-05 00:58:54.591362 | orchestrator | Thursday 05 February 2026 00:58:09 +0000 (0:00:12.985) 0:00:23.341 ***** 2026-02-05 00:58:54.591366 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-05 00:58:54.591852 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-02-05 00:58:54.591887 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-05 00:58:54.591892 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-05 00:58:54.591925 | 
orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-05 00:58:54.591931 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-05 00:58:54.591935 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-05 00:58:54.591939 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-05 00:58:54.591943 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-05 00:58:54.591947 | orchestrator | 2026-02-05 00:58:54.591951 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-05 00:58:54.591956 | orchestrator | Thursday 05 February 2026 00:58:12 +0000 (0:00:02.999) 0:00:26.340 ***** 2026-02-05 00:58:54.591960 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-05 00:58:54.591967 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591971 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591974 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 00:58:54.591978 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-05 00:58:54.591982 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-05 00:58:54.591993 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-02-05 00:58:54.591997 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-05 00:58:54.592001 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-05 00:58:54.592005 | orchestrator | 2026-02-05 00:58:54.592008 | orchestrator | 
PLAY RECAP ********************************************************************* 2026-02-05 00:58:54.592012 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:58:54.592018 | orchestrator | 2026-02-05 00:58:54.592021 | orchestrator | 2026-02-05 00:58:54.592025 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:58:54.592029 | orchestrator | Thursday 05 February 2026 00:58:18 +0000 (0:00:06.200) 0:00:32.541 ***** 2026-02-05 00:58:54.592033 | orchestrator | =============================================================================== 2026-02-05 00:58:54.592036 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.99s 2026-02-05 00:58:54.592040 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.20s 2026-02-05 00:58:54.592044 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.79s 2026-02-05 00:58:54.592048 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.43s 2026-02-05 00:58:54.592051 | orchestrator | Check if target directories exist --------------------------------------- 3.00s 2026-02-05 00:58:54.592055 | orchestrator | Create share directory -------------------------------------------------- 0.98s 2026-02-05 00:58:54.592059 | orchestrator | 2026-02-05 00:58:54.592062 | orchestrator | 2026-02-05 00:58:54.592066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:58:54.592070 | orchestrator | 2026-02-05 00:58:54.592073 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:58:54.592077 | orchestrator | Thursday 05 February 2026 00:56:06 +0000 (0:00:00.226) 0:00:00.226 ***** 2026-02-05 00:58:54.592081 | orchestrator | ok: [testbed-node-0] 2026-02-05 
00:58:54.592085 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:54.592089 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:54.592093 | orchestrator | 2026-02-05 00:58:54.592098 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:58:54.592104 | orchestrator | Thursday 05 February 2026 00:56:06 +0000 (0:00:00.273) 0:00:00.499 ***** 2026-02-05 00:58:54.592110 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-05 00:58:54.592116 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-05 00:58:54.592122 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-05 00:58:54.592132 | orchestrator | 2026-02-05 00:58:54.592138 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-05 00:58:54.592145 | orchestrator | 2026-02-05 00:58:54.592151 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 00:58:54.592157 | orchestrator | Thursday 05 February 2026 00:56:06 +0000 (0:00:00.352) 0:00:00.852 ***** 2026-02-05 00:58:54.592162 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:58:54.592168 | orchestrator | 2026-02-05 00:58:54.592197 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-05 00:58:54.592203 | orchestrator | Thursday 05 February 2026 00:56:07 +0000 (0:00:00.502) 0:00:01.355 ***** 2026-02-05 00:58:54.592236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592330 | orchestrator | 2026-02-05 00:58:54.592334 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-02-05 00:58:54.592338 | orchestrator | Thursday 05 February 2026 00:56:08 +0000 (0:00:01.684) 0:00:03.039 ***** 2026-02-05 00:58:54.592342 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592346 | orchestrator | 2026-02-05 00:58:54.592350 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-05 00:58:54.592353 | orchestrator | Thursday 05 February 2026 00:56:08 +0000 (0:00:00.128) 0:00:03.168 ***** 2026-02-05 00:58:54.592357 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592361 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592364 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.592368 | orchestrator | 2026-02-05 00:58:54.592372 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-05 00:58:54.592375 | orchestrator | Thursday 05 February 2026 00:56:09 +0000 (0:00:00.351) 0:00:03.520 ***** 2026-02-05 00:58:54.592379 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 
00:58:54.592383 | orchestrator | 2026-02-05 00:58:54.592387 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 00:58:54.592390 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.718) 0:00:04.238 ***** 2026-02-05 00:58:54.592394 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:58:54.592398 | orchestrator | 2026-02-05 00:58:54.592402 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-05 00:58:54.592408 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:00.474) 0:00:04.712 ***** 2026-02-05 00:58:54.592425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592444 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592463 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592479 | orchestrator | 2026-02-05 00:58:54.592484 | orchestrator | TASK 
[service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-05 00:58:54.592488 | orchestrator | Thursday 05 February 2026 00:56:13 +0000 (0:00:03.083) 0:00:07.795 ***** 2026-02-05 00:58:54.592493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592510 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592515 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592539 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592567 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.592572 | orchestrator | 2026-02-05 00:58:54.592576 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-05 00:58:54.592581 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.640) 0:00:08.435 ***** 2026-02-05 00:58:54.592586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592602 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54 | INFO  | Task 40ce4598-11fb-4273-8389-2fb34189dca2 is in state SUCCESS 2026-02-05 00:58:54.592621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592629 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2026-02-05 00:58:54.592658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592673 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.592680 | orchestrator | 2026-02-05 00:58:54.592687 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-05 00:58:54.592692 | orchestrator | Thursday 05 February 2026 00:56:14 +0000 (0:00:00.737) 0:00:09.173 ***** 2026-02-05 00:58:54.592698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592747 | orchestrator | 2026-02-05 00:58:54.592751 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-05 00:58:54.592755 | orchestrator | Thursday 05 February 2026 00:56:18 +0000 (0:00:03.203) 0:00:12.376 ***** 2026-02-05 00:58:54.592759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.592788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.592815 | orchestrator | 2026-02-05 00:58:54.592818 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-05 00:58:54.592822 | orchestrator | Thursday 05 February 2026 00:56:23 +0000 (0:00:05.050) 0:00:17.427 ***** 2026-02-05 00:58:54.592826 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.592830 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:58:54.592834 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:58:54.592838 | orchestrator | 2026-02-05 00:58:54.592841 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-05 00:58:54.592845 | orchestrator | Thursday 05 February 2026 00:56:24 +0000 (0:00:01.329) 0:00:18.756 ***** 2026-02-05 00:58:54.592849 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592852 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592856 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.592860 | orchestrator | 2026-02-05 00:58:54.592864 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-05 00:58:54.592867 | orchestrator | Thursday 05 February 2026 00:56:25 +0000 (0:00:00.458) 0:00:19.215 ***** 2026-02-05 00:58:54.592871 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592875 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592878 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.592882 | orchestrator 
| 2026-02-05 00:58:54.592886 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-05 00:58:54.592889 | orchestrator | Thursday 05 February 2026 00:56:25 +0000 (0:00:00.260) 0:00:19.476 ***** 2026-02-05 00:58:54.592893 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592897 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592900 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.592904 | orchestrator | 2026-02-05 00:58:54.592908 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-05 00:58:54.592912 | orchestrator | Thursday 05 February 2026 00:56:25 +0000 (0:00:00.379) 0:00:19.855 ***** 2026-02-05 00:58:54.592916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592936 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.592940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592952 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.592960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:58:54.592976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:58:54.592988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:58:54.592994 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.593000 | orchestrator | 2026-02-05 00:58:54.593006 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 00:58:54.593012 | orchestrator | Thursday 05 February 2026 00:56:26 +0000 (0:00:00.489) 0:00:20.345 ***** 2026-02-05 00:58:54.593018 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.593023 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.593029 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.593034 | orchestrator | 2026-02-05 00:58:54.593040 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-05 00:58:54.593045 | orchestrator | Thursday 05 February 2026 00:56:26 +0000 (0:00:00.260) 0:00:20.605 ***** 2026-02-05 00:58:54.593051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 00:58:54.593057 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 00:58:54.593063 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 00:58:54.593070 | orchestrator | 2026-02-05 00:58:54.593076 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-05 00:58:54.593082 | orchestrator | Thursday 05 February 2026 00:56:27 +0000 (0:00:01.476) 0:00:22.082 ***** 2026-02-05 00:58:54.593088 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 00:58:54.593094 | orchestrator | 2026-02-05 00:58:54.593100 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-05 00:58:54.593107 | orchestrator | Thursday 05 February 2026 00:56:28 +0000 (0:00:00.943) 0:00:23.025 ***** 
2026-02-05 00:58:54.593114 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.593120 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.593126 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.593133 | orchestrator | 2026-02-05 00:58:54.593138 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-05 00:58:54.593144 | orchestrator | Thursday 05 February 2026 00:56:29 +0000 (0:00:00.725) 0:00:23.750 ***** 2026-02-05 00:58:54.593151 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 00:58:54.593157 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 00:58:54.593163 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-05 00:58:54.593169 | orchestrator | 2026-02-05 00:58:54.593176 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-05 00:58:54.593187 | orchestrator | Thursday 05 February 2026 00:56:30 +0000 (0:00:01.172) 0:00:24.922 ***** 2026-02-05 00:58:54.593194 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:54.593200 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:54.593207 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:54.593213 | orchestrator | 2026-02-05 00:58:54.593218 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-05 00:58:54.593222 | orchestrator | Thursday 05 February 2026 00:56:31 +0000 (0:00:00.306) 0:00:25.228 ***** 2026-02-05 00:58:54.593226 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 00:58:54.593230 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 00:58:54.593233 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 00:58:54.593242 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 
'fernet-rotate.sh'}) 2026-02-05 00:58:54.593246 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 00:58:54.593250 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 00:58:54.593254 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 00:58:54.593259 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 00:58:54.593266 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 00:58:54.593271 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 00:58:54.593281 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 00:58:54.593286 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 00:58:54.593292 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 00:58:54.593297 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 00:58:54.593303 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 00:58:54.593309 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 00:58:54.593314 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 00:58:54.593320 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 00:58:54.593325 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 00:58:54.593330 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 00:58:54.593336 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 00:58:54.593342 | orchestrator | 2026-02-05 00:58:54.593349 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-05 00:58:54.593354 | orchestrator | Thursday 05 February 2026 00:56:39 +0000 (0:00:08.718) 0:00:33.947 ***** 2026-02-05 00:58:54.593360 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 00:58:54.593365 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 00:58:54.593371 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 00:58:54.593376 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 00:58:54.593383 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 00:58:54.593394 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 00:58:54.593401 | orchestrator | 2026-02-05 00:58:54.593407 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-05 00:58:54.593413 | orchestrator | Thursday 05 February 2026 00:56:42 +0000 (0:00:03.120) 0:00:37.068 ***** 2026-02-05 00:58:54.593421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.593435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.593447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:58:54.593454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.593465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.593469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 00:58:54.593476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.593480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.593488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 00:58:54.593495 | orchestrator | 2026-02-05 00:58:54.593500 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 00:58:54.593506 | orchestrator | Thursday 05 February 2026 00:56:45 +0000 (0:00:02.467) 0:00:39.536 ***** 2026-02-05 00:58:54.593512 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.593518 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.593523 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.593529 | orchestrator | 2026-02-05 00:58:54.593534 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-05 00:58:54.593540 | orchestrator | Thursday 05 February 2026 00:56:45 +0000 (0:00:00.282) 0:00:39.818 ***** 2026-02-05 00:58:54.593550 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593556 | orchestrator | 2026-02-05 00:58:54.593562 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-05 00:58:54.593569 | orchestrator | Thursday 05 February 2026 00:56:47 +0000 
(0:00:02.324) 0:00:42.142 ***** 2026-02-05 00:58:54.593575 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593580 | orchestrator | 2026-02-05 00:58:54.593586 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-05 00:58:54.593592 | orchestrator | Thursday 05 February 2026 00:56:50 +0000 (0:00:02.173) 0:00:44.316 ***** 2026-02-05 00:58:54.593599 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:54.593605 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:54.593612 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:54.593618 | orchestrator | 2026-02-05 00:58:54.593624 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-05 00:58:54.593631 | orchestrator | Thursday 05 February 2026 00:56:50 +0000 (0:00:00.838) 0:00:45.155 ***** 2026-02-05 00:58:54.593638 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:54.593669 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:54.593673 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:54.593676 | orchestrator | 2026-02-05 00:58:54.593680 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-05 00:58:54.593684 | orchestrator | Thursday 05 February 2026 00:56:51 +0000 (0:00:00.508) 0:00:45.663 ***** 2026-02-05 00:58:54.593688 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.593691 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.593695 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.593699 | orchestrator | 2026-02-05 00:58:54.593703 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-05 00:58:54.593706 | orchestrator | Thursday 05 February 2026 00:56:51 +0000 (0:00:00.401) 0:00:46.065 ***** 2026-02-05 00:58:54.593710 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593714 | orchestrator | 2026-02-05 00:58:54.593718 | 
orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-05 00:58:54.593721 | orchestrator | Thursday 05 February 2026 00:57:07 +0000 (0:00:15.633) 0:01:01.698 ***** 2026-02-05 00:58:54.593725 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593729 | orchestrator | 2026-02-05 00:58:54.593732 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-05 00:58:54.593736 | orchestrator | Thursday 05 February 2026 00:57:19 +0000 (0:00:11.922) 0:01:13.620 ***** 2026-02-05 00:58:54.593740 | orchestrator | 2026-02-05 00:58:54.593744 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-05 00:58:54.593747 | orchestrator | Thursday 05 February 2026 00:57:19 +0000 (0:00:00.064) 0:01:13.685 ***** 2026-02-05 00:58:54.593751 | orchestrator | 2026-02-05 00:58:54.593755 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-05 00:58:54.593758 | orchestrator | Thursday 05 February 2026 00:57:19 +0000 (0:00:00.061) 0:01:13.746 ***** 2026-02-05 00:58:54.593762 | orchestrator | 2026-02-05 00:58:54.593768 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-05 00:58:54.593774 | orchestrator | Thursday 05 February 2026 00:57:19 +0000 (0:00:00.069) 0:01:13.816 ***** 2026-02-05 00:58:54.593781 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593790 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:58:54.593795 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:58:54.593801 | orchestrator | 2026-02-05 00:58:54.593807 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-05 00:58:54.593818 | orchestrator | Thursday 05 February 2026 00:57:36 +0000 (0:00:16.907) 0:01:30.724 ***** 2026-02-05 00:58:54.593824 | orchestrator | changed: [testbed-node-0] 
2026-02-05 00:58:54.593830 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:58:54.593836 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:58:54.593842 | orchestrator | 2026-02-05 00:58:54.593848 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-05 00:58:54.593860 | orchestrator | Thursday 05 February 2026 00:57:46 +0000 (0:00:09.937) 0:01:40.661 ***** 2026-02-05 00:58:54.593866 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:58:54.593871 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593877 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:58:54.593883 | orchestrator | 2026-02-05 00:58:54.593889 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 00:58:54.593894 | orchestrator | Thursday 05 February 2026 00:57:58 +0000 (0:00:12.013) 0:01:52.674 ***** 2026-02-05 00:58:54.593900 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:58:54.593906 | orchestrator | 2026-02-05 00:58:54.593917 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-05 00:58:54.593921 | orchestrator | Thursday 05 February 2026 00:57:59 +0000 (0:00:00.551) 0:01:53.225 ***** 2026-02-05 00:58:54.593925 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:54.593928 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:54.593932 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:54.593936 | orchestrator | 2026-02-05 00:58:54.593940 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-05 00:58:54.593943 | orchestrator | Thursday 05 February 2026 00:57:59 +0000 (0:00:00.950) 0:01:54.176 ***** 2026-02-05 00:58:54.593947 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:54.593951 | orchestrator | 2026-02-05 00:58:54.593955 | 
orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-05 00:58:54.593958 | orchestrator | Thursday 05 February 2026 00:58:01 +0000 (0:00:01.717) 0:01:55.894 ***** 2026-02-05 00:58:54.593962 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-05 00:58:54.593966 | orchestrator | 2026-02-05 00:58:54.593970 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-05 00:58:54.593974 | orchestrator | Thursday 05 February 2026 00:58:15 +0000 (0:00:13.354) 0:02:09.248 ***** 2026-02-05 00:58:54.593977 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-05 00:58:54.593981 | orchestrator | 2026-02-05 00:58:54.593985 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-05 00:58:54.593989 | orchestrator | Thursday 05 February 2026 00:58:41 +0000 (0:00:26.283) 0:02:35.532 ***** 2026-02-05 00:58:54.593992 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-05 00:58:54.593996 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-05 00:58:54.594000 | orchestrator | 2026-02-05 00:58:54.594004 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-05 00:58:54.594008 | orchestrator | Thursday 05 February 2026 00:58:48 +0000 (0:00:07.448) 0:02:42.981 ***** 2026-02-05 00:58:54.594080 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.594094 | orchestrator | 2026-02-05 00:58:54.594101 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-05 00:58:54.594105 | orchestrator | Thursday 05 February 2026 00:58:48 +0000 (0:00:00.140) 0:02:43.122 ***** 2026-02-05 00:58:54.594109 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.594113 | 
orchestrator | 2026-02-05 00:58:54.594117 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-05 00:58:54.594121 | orchestrator | Thursday 05 February 2026 00:58:49 +0000 (0:00:00.103) 0:02:43.225 ***** 2026-02-05 00:58:54.594124 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.594128 | orchestrator | 2026-02-05 00:58:54.594132 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-05 00:58:54.594136 | orchestrator | Thursday 05 February 2026 00:58:49 +0000 (0:00:00.134) 0:02:43.359 ***** 2026-02-05 00:58:54.594139 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.594143 | orchestrator | 2026-02-05 00:58:54.594147 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-05 00:58:54.594156 | orchestrator | Thursday 05 February 2026 00:58:49 +0000 (0:00:00.488) 0:02:43.848 ***** 2026-02-05 00:58:54.594159 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:54.594163 | orchestrator | 2026-02-05 00:58:54.594167 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 00:58:54.594171 | orchestrator | Thursday 05 February 2026 00:58:53 +0000 (0:00:03.569) 0:02:47.417 ***** 2026-02-05 00:58:54.594175 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:54.594179 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:54.594182 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:54.594186 | orchestrator | 2026-02-05 00:58:54.594190 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:58:54.594194 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 00:58:54.594200 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 
00:58:54.594204 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 00:58:54.594208 | orchestrator | 2026-02-05 00:58:54.594211 | orchestrator | 2026-02-05 00:58:54.594215 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:58:54.594219 | orchestrator | Thursday 05 February 2026 00:58:53 +0000 (0:00:00.435) 0:02:47.853 ***** 2026-02-05 00:58:54.594223 | orchestrator | =============================================================================== 2026-02-05 00:58:54.594232 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.28s 2026-02-05 00:58:54.594236 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 16.91s 2026-02-05 00:58:54.594240 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.63s 2026-02-05 00:58:54.594243 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.35s 2026-02-05 00:58:54.594247 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.01s 2026-02-05 00:58:54.594251 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.92s 2026-02-05 00:58:54.594254 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.94s 2026-02-05 00:58:54.594258 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.72s 2026-02-05 00:58:54.594262 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.45s 2026-02-05 00:58:54.594269 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.05s 2026-02-05 00:58:54.594273 | orchestrator | keystone : Creating default user role ----------------------------------- 3.57s 2026-02-05 00:58:54.594277 | orchestrator | keystone : 
Copying over config.json files for services ------------------ 3.20s 2026-02-05 00:58:54.594280 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.12s 2026-02-05 00:58:54.594284 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.08s 2026-02-05 00:58:54.594288 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.47s 2026-02-05 00:58:54.594292 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.32s 2026-02-05 00:58:54.594295 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.17s 2026-02-05 00:58:54.594299 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.72s 2026-02-05 00:58:54.594305 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.68s 2026-02-05 00:58:54.594311 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.48s 2026-02-05 00:58:54.594320 | orchestrator | 2026-02-05 00:58:54 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:54.594333 | orchestrator | 2026-02-05 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:57.614603 | orchestrator | 2026-02-05 00:58:57 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:58:57.615540 | orchestrator | 2026-02-05 00:58:57 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:58:57.615586 | orchestrator | 2026-02-05 00:58:57 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:58:57.616519 | orchestrator | 2026-02-05 00:58:57 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:58:57.617331 | orchestrator | 2026-02-05 00:58:57 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 
00:58:57.617374 | orchestrator | 2026-02-05 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:00.648932 | orchestrator | 2026-02-05 00:59:00 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:00.650726 | orchestrator | 2026-02-05 00:59:00 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:00.654763 | orchestrator | 2026-02-05 00:59:00 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:00.656723 | orchestrator | 2026-02-05 00:59:00 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:59:00.658541 | orchestrator | 2026-02-05 00:59:00 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:00.658588 | orchestrator | 2026-02-05 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:03.708627 | orchestrator | 2026-02-05 00:59:03 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:03.711355 | orchestrator | 2026-02-05 00:59:03 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:03.713863 | orchestrator | 2026-02-05 00:59:03 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:03.716451 | orchestrator | 2026-02-05 00:59:03 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:59:03.719125 | orchestrator | 2026-02-05 00:59:03 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:03.719274 | orchestrator | 2026-02-05 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:06.763794 | orchestrator | 2026-02-05 00:59:06 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:06.766077 | orchestrator | 2026-02-05 00:59:06 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:06.768677 | orchestrator 
| 2026-02-05 00:59:06 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:06.770426 | orchestrator | 2026-02-05 00:59:06 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:59:06.772031 | orchestrator | 2026-02-05 00:59:06 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:06.772070 | orchestrator | 2026-02-05 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:09.813384 | orchestrator | 2026-02-05 00:59:09 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:09.816448 | orchestrator | 2026-02-05 00:59:09 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:09.816504 | orchestrator | 2026-02-05 00:59:09 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:09.818468 | orchestrator | 2026-02-05 00:59:09 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:59:09.819876 | orchestrator | 2026-02-05 00:59:09 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:09.819918 | orchestrator | 2026-02-05 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:12.872318 | orchestrator | 2026-02-05 00:59:12 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:12.874449 | orchestrator | 2026-02-05 00:59:12 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:12.877321 | orchestrator | 2026-02-05 00:59:12 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:12.879755 | orchestrator | 2026-02-05 00:59:12 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:59:12.881692 | orchestrator | 2026-02-05 00:59:12 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:12.882320 | orchestrator | 
2026-02-05 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:15.922950 | orchestrator | 2026-02-05 00:59:15 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:15.923541 | orchestrator | 2026-02-05 00:59:15 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:15.924694 | orchestrator | 2026-02-05 00:59:15 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:15.925990 | orchestrator | 2026-02-05 00:59:15 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state STARTED 2026-02-05 00:59:15.926505 | orchestrator | 2026-02-05 00:59:15 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:15.926610 | orchestrator | 2026-02-05 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:18.970625 | orchestrator | 2026-02-05 00:59:18 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:18.972552 | orchestrator | 2026-02-05 00:59:18 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:18.974416 | orchestrator | 2026-02-05 00:59:18 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:18.977423 | orchestrator | 2026-02-05 00:59:18 | INFO  | Task 2514acf7-4173-458e-be45-9a922640b918 is in state SUCCESS 2026-02-05 00:59:18.978991 | orchestrator | 2026-02-05 00:59:18 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:18.980414 | orchestrator | 2026-02-05 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:22.024590 | orchestrator | 2026-02-05 00:59:22 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:22.026608 | orchestrator | 2026-02-05 00:59:22 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:22.028470 | orchestrator | 2026-02-05 00:59:22 | INFO  | 
Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:22.030177 | orchestrator | 2026-02-05 00:59:22 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:22.031909 | orchestrator | 2026-02-05 00:59:22 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:22.031951 | orchestrator | 2026-02-05 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:25.070555 | orchestrator | 2026-02-05 00:59:25 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:25.071488 | orchestrator | 2026-02-05 00:59:25 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:25.072613 | orchestrator | 2026-02-05 00:59:25 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:25.074597 | orchestrator | 2026-02-05 00:59:25 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:25.075197 | orchestrator | 2026-02-05 00:59:25 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:25.075224 | orchestrator | 2026-02-05 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:28.116842 | orchestrator | 2026-02-05 00:59:28 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:28.117869 | orchestrator | 2026-02-05 00:59:28 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:28.119800 | orchestrator | 2026-02-05 00:59:28 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:28.121507 | orchestrator | 2026-02-05 00:59:28 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:28.122999 | orchestrator | 2026-02-05 00:59:28 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:28.123036 | orchestrator | 2026-02-05 00:59:28 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 00:59:31.166772 | orchestrator | 2026-02-05 00:59:31 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:31.170845 | orchestrator | 2026-02-05 00:59:31 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:31.173449 | orchestrator | 2026-02-05 00:59:31 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:31.176270 | orchestrator | 2026-02-05 00:59:31 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:31.178962 | orchestrator | 2026-02-05 00:59:31 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:31.179012 | orchestrator | 2026-02-05 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:34.218749 | orchestrator | 2026-02-05 00:59:34 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:34.219027 | orchestrator | 2026-02-05 00:59:34 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:34.220217 | orchestrator | 2026-02-05 00:59:34 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:34.220804 | orchestrator | 2026-02-05 00:59:34 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:34.222519 | orchestrator | 2026-02-05 00:59:34 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:34.222559 | orchestrator | 2026-02-05 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:37.255498 | orchestrator | 2026-02-05 00:59:37 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:37.256356 | orchestrator | 2026-02-05 00:59:37 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:37.257255 | orchestrator | 2026-02-05 00:59:37 | INFO  | Task 
4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:37.258694 | orchestrator | 2026-02-05 00:59:37 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:37.259697 | orchestrator | 2026-02-05 00:59:37 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:37.259725 | orchestrator | 2026-02-05 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:40.303915 | orchestrator | 2026-02-05 00:59:40 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:40.304516 | orchestrator | 2026-02-05 00:59:40 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:40.305562 | orchestrator | 2026-02-05 00:59:40 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:40.307186 | orchestrator | 2026-02-05 00:59:40 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:40.308702 | orchestrator | 2026-02-05 00:59:40 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:40.309071 | orchestrator | 2026-02-05 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:43.341551 | orchestrator | 2026-02-05 00:59:43 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:43.343894 | orchestrator | 2026-02-05 00:59:43 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:43.344481 | orchestrator | 2026-02-05 00:59:43 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:43.345426 | orchestrator | 2026-02-05 00:59:43 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:43.346262 | orchestrator | 2026-02-05 00:59:43 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:43.346328 | orchestrator | 2026-02-05 00:59:43 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 00:59:46.382377 | orchestrator | 2026-02-05 00:59:46 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:46.382904 | orchestrator | 2026-02-05 00:59:46 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:46.383425 | orchestrator | 2026-02-05 00:59:46 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:46.384473 | orchestrator | 2026-02-05 00:59:46 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:46.385719 | orchestrator | 2026-02-05 00:59:46 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:46.385769 | orchestrator | 2026-02-05 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:49.413554 | orchestrator | 2026-02-05 00:59:49 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:49.413908 | orchestrator | 2026-02-05 00:59:49 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:49.414715 | orchestrator | 2026-02-05 00:59:49 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:49.415392 | orchestrator | 2026-02-05 00:59:49 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:49.416339 | orchestrator | 2026-02-05 00:59:49 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:49.416379 | orchestrator | 2026-02-05 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:52.445331 | orchestrator | 2026-02-05 00:59:52 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:52.445462 | orchestrator | 2026-02-05 00:59:52 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:52.446308 | orchestrator | 2026-02-05 00:59:52 | INFO  | Task 
4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:52.447032 | orchestrator | 2026-02-05 00:59:52 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:52.447651 | orchestrator | 2026-02-05 00:59:52 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:52.448118 | orchestrator | 2026-02-05 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:55.469905 | orchestrator | 2026-02-05 00:59:55 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:55.470290 | orchestrator | 2026-02-05 00:59:55 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:55.471144 | orchestrator | 2026-02-05 00:59:55 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:55.472193 | orchestrator | 2026-02-05 00:59:55 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:55.473002 | orchestrator | 2026-02-05 00:59:55 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:55.473111 | orchestrator | 2026-02-05 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:59:58.494199 | orchestrator | 2026-02-05 00:59:58 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 00:59:58.494391 | orchestrator | 2026-02-05 00:59:58 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 00:59:58.495145 | orchestrator | 2026-02-05 00:59:58 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 00:59:58.495824 | orchestrator | 2026-02-05 00:59:58 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 00:59:58.497069 | orchestrator | 2026-02-05 00:59:58 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 00:59:58.497110 | orchestrator | 2026-02-05 00:59:58 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 01:00:01.532215 | orchestrator | 2026-02-05 01:00:01 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:01.532322 | orchestrator | 2026-02-05 01:00:01 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:01.532890 | orchestrator | 2026-02-05 01:00:01 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:01.533436 | orchestrator | 2026-02-05 01:00:01 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:01.533909 | orchestrator | 2026-02-05 01:00:01 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:01.533985 | orchestrator | 2026-02-05 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:04.552516 | orchestrator | 2026-02-05 01:00:04 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:04.553134 | orchestrator | 2026-02-05 01:00:04 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:04.554351 | orchestrator | 2026-02-05 01:00:04 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:04.555249 | orchestrator | 2026-02-05 01:00:04 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:04.556164 | orchestrator | 2026-02-05 01:00:04 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:04.557264 | orchestrator | 2026-02-05 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:07.576501 | orchestrator | 2026-02-05 01:00:07 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:07.576581 | orchestrator | 2026-02-05 01:00:07 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:07.577141 | orchestrator | 2026-02-05 01:00:07 | INFO  | Task 
4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:07.577746 | orchestrator | 2026-02-05 01:00:07 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:07.578320 | orchestrator | 2026-02-05 01:00:07 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:07.578343 | orchestrator | 2026-02-05 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:10.602164 | orchestrator | 2026-02-05 01:00:10 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:10.602856 | orchestrator | 2026-02-05 01:00:10 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:10.603851 | orchestrator | 2026-02-05 01:00:10 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:10.604854 | orchestrator | 2026-02-05 01:00:10 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:10.606063 | orchestrator | 2026-02-05 01:00:10 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:10.606090 | orchestrator | 2026-02-05 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:13.627074 | orchestrator | 2026-02-05 01:00:13 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:13.627233 | orchestrator | 2026-02-05 01:00:13 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:13.628034 | orchestrator | 2026-02-05 01:00:13 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:13.628654 | orchestrator | 2026-02-05 01:00:13 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:13.629699 | orchestrator | 2026-02-05 01:00:13 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:13.629736 | orchestrator | 2026-02-05 01:00:13 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 01:00:16.650845 | orchestrator | 2026-02-05 01:00:16 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:16.651376 | orchestrator | 2026-02-05 01:00:16 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:16.652507 | orchestrator | 2026-02-05 01:00:16 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:16.653661 | orchestrator | 2026-02-05 01:00:16 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:16.654432 | orchestrator | 2026-02-05 01:00:16 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:16.654468 | orchestrator | 2026-02-05 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:19.686690 | orchestrator | 2026-02-05 01:00:19 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:19.687363 | orchestrator | 2026-02-05 01:00:19 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:19.687814 | orchestrator | 2026-02-05 01:00:19 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:19.688527 | orchestrator | 2026-02-05 01:00:19 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:19.689310 | orchestrator | 2026-02-05 01:00:19 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:19.689365 | orchestrator | 2026-02-05 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:22.725908 | orchestrator | 2026-02-05 01:00:22 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:22.726226 | orchestrator | 2026-02-05 01:00:22 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:22.727916 | orchestrator | 2026-02-05 01:00:22 | INFO  | Task 
4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:22.728554 | orchestrator | 2026-02-05 01:00:22 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:22.729427 | orchestrator | 2026-02-05 01:00:22 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:22.729461 | orchestrator | 2026-02-05 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:25.756156 | orchestrator | 2026-02-05 01:00:25 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:25.758793 | orchestrator | 2026-02-05 01:00:25 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:25.759640 | orchestrator | 2026-02-05 01:00:25 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:25.760630 | orchestrator | 2026-02-05 01:00:25 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:25.761314 | orchestrator | 2026-02-05 01:00:25 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:25.761429 | orchestrator | 2026-02-05 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:28.783295 | orchestrator | 2026-02-05 01:00:28 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:28.783460 | orchestrator | 2026-02-05 01:00:28 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:28.784274 | orchestrator | 2026-02-05 01:00:28 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:28.785199 | orchestrator | 2026-02-05 01:00:28 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:28.785805 | orchestrator | 2026-02-05 01:00:28 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:28.785825 | orchestrator | 2026-02-05 01:00:28 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 01:00:31.813691 | orchestrator | 2026-02-05 01:00:31 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:31.814247 | orchestrator | 2026-02-05 01:00:31 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:31.815554 | orchestrator | 2026-02-05 01:00:31 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:31.816577 | orchestrator | 2026-02-05 01:00:31 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:31.818148 | orchestrator | 2026-02-05 01:00:31 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:31.818201 | orchestrator | 2026-02-05 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:34.846330 | orchestrator | 2026-02-05 01:00:34 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:34.846579 | orchestrator | 2026-02-05 01:00:34 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:34.847262 | orchestrator | 2026-02-05 01:00:34 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:34.847904 | orchestrator | 2026-02-05 01:00:34 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:34.850942 | orchestrator | 2026-02-05 01:00:34 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:34.851032 | orchestrator | 2026-02-05 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:37.879429 | orchestrator | 2026-02-05 01:00:37 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:37.879813 | orchestrator | 2026-02-05 01:00:37 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:37.880791 | orchestrator | 2026-02-05 01:00:37 | INFO  | Task 
4b90ed58-9970-4efa-b27d-4017dd576b52 is in state STARTED 2026-02-05 01:00:37.881546 | orchestrator | 2026-02-05 01:00:37 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:37.882371 | orchestrator | 2026-02-05 01:00:37 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:37.882433 | orchestrator | 2026-02-05 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:40.920187 | orchestrator | 2026-02-05 01:00:40 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:40.920325 | orchestrator | 2026-02-05 01:00:40 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:40.920883 | orchestrator | 2026-02-05 01:00:40 | INFO  | Task 4b90ed58-9970-4efa-b27d-4017dd576b52 is in state SUCCESS 2026-02-05 01:00:40.921106 | orchestrator | 2026-02-05 01:00:40.921122 | orchestrator | 2026-02-05 01:00:40.921127 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-05 01:00:40.921132 | orchestrator | 2026-02-05 01:00:40.921136 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-05 01:00:40.921141 | orchestrator | Thursday 05 February 2026 00:58:22 +0000 (0:00:00.216) 0:00:00.216 ***** 2026-02-05 01:00:40.921145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-05 01:00:40.921151 | orchestrator | 2026-02-05 01:00:40.921156 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-05 01:00:40.921160 | orchestrator | Thursday 05 February 2026 00:58:23 +0000 (0:00:00.247) 0:00:00.464 ***** 2026-02-05 01:00:40.921165 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-05 01:00:40.921169 | orchestrator | changed: [testbed-manager] => 
(item=/opt/cephclient/data) 2026-02-05 01:00:40.921174 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-05 01:00:40.921178 | orchestrator | 2026-02-05 01:00:40.921182 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-05 01:00:40.921186 | orchestrator | Thursday 05 February 2026 00:58:24 +0000 (0:00:01.223) 0:00:01.688 ***** 2026-02-05 01:00:40.921190 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-05 01:00:40.921193 | orchestrator | 2026-02-05 01:00:40.921197 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-05 01:00:40.921201 | orchestrator | Thursday 05 February 2026 00:58:25 +0000 (0:00:01.427) 0:00:03.116 ***** 2026-02-05 01:00:40.921205 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921210 | orchestrator | 2026-02-05 01:00:40.921213 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-05 01:00:40.921235 | orchestrator | Thursday 05 February 2026 00:58:26 +0000 (0:00:00.903) 0:00:04.019 ***** 2026-02-05 01:00:40.921239 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921243 | orchestrator | 2026-02-05 01:00:40.921247 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-05 01:00:40.921251 | orchestrator | Thursday 05 February 2026 00:58:27 +0000 (0:00:00.905) 0:00:04.925 ***** 2026-02-05 01:00:40.921255 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-02-05 01:00:40.921259 | orchestrator | ok: [testbed-manager] 2026-02-05 01:00:40.921282 | orchestrator | 2026-02-05 01:00:40.921286 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-05 01:00:40.921290 | orchestrator | Thursday 05 February 2026 00:59:08 +0000 (0:00:41.132) 0:00:46.057 ***** 2026-02-05 01:00:40.921294 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-05 01:00:40.921298 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-05 01:00:40.921302 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-05 01:00:40.921306 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-05 01:00:40.921309 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-05 01:00:40.921314 | orchestrator | 2026-02-05 01:00:40.921321 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-05 01:00:40.921328 | orchestrator | Thursday 05 February 2026 00:59:12 +0000 (0:00:04.067) 0:00:50.124 ***** 2026-02-05 01:00:40.921334 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-05 01:00:40.921340 | orchestrator | 2026-02-05 01:00:40.921347 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-05 01:00:40.921353 | orchestrator | Thursday 05 February 2026 00:59:13 +0000 (0:00:00.447) 0:00:50.572 ***** 2026-02-05 01:00:40.921359 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:00:40.921365 | orchestrator | 2026-02-05 01:00:40.921371 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-05 01:00:40.921378 | orchestrator | Thursday 05 February 2026 00:59:13 +0000 (0:00:00.126) 0:00:50.699 ***** 2026-02-05 01:00:40.921382 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:00:40.921386 | orchestrator | 2026-02-05 01:00:40.921390 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-02-05 01:00:40.921394 | orchestrator | Thursday 05 February 2026 00:59:13 +0000 (0:00:00.477) 0:00:51.176 ***** 2026-02-05 01:00:40.921398 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921402 | orchestrator | 2026-02-05 01:00:40.921406 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-05 01:00:40.921410 | orchestrator | Thursday 05 February 2026 00:59:15 +0000 (0:00:01.387) 0:00:52.563 ***** 2026-02-05 01:00:40.921413 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921417 | orchestrator | 2026-02-05 01:00:40.921421 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-05 01:00:40.921425 | orchestrator | Thursday 05 February 2026 00:59:16 +0000 (0:00:00.782) 0:00:53.346 ***** 2026-02-05 01:00:40.921428 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921432 | orchestrator | 2026-02-05 01:00:40.921436 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-05 01:00:40.921440 | orchestrator | Thursday 05 February 2026 00:59:16 +0000 (0:00:00.584) 0:00:53.930 ***** 2026-02-05 01:00:40.921454 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-05 01:00:40.921459 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-05 01:00:40.921463 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-05 01:00:40.921466 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-05 01:00:40.921470 | orchestrator | 2026-02-05 01:00:40.921474 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:00:40.921478 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 01:00:40.921483 | orchestrator | 2026-02-05 01:00:40.921487 | orchestrator | 2026-02-05 
01:00:40.921497 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:00:40.921501 | orchestrator | Thursday 05 February 2026 00:59:18 +0000 (0:00:01.475) 0:00:55.406 ***** 2026-02-05 01:00:40.921505 | orchestrator | =============================================================================== 2026-02-05 01:00:40.921509 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.13s 2026-02-05 01:00:40.921512 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.07s 2026-02-05 01:00:40.921521 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s 2026-02-05 01:00:40.921525 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.43s 2026-02-05 01:00:40.921529 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.39s 2026-02-05 01:00:40.921532 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2026-02-05 01:00:40.921536 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2026-02-05 01:00:40.921540 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2026-02-05 01:00:40.921544 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2026-02-05 01:00:40.921547 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2026-02-05 01:00:40.921551 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.48s 2026-02-05 01:00:40.921555 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2026-02-05 01:00:40.921559 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-02-05 01:00:40.921562 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-02-05 01:00:40.921566 | orchestrator | 2026-02-05 01:00:40.921570 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 01:00:40.921574 | orchestrator | 2.16.14 2026-02-05 01:00:40.921579 | orchestrator | 2026-02-05 01:00:40.921582 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-05 01:00:40.921586 | orchestrator | 2026-02-05 01:00:40.921590 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-05 01:00:40.921594 | orchestrator | Thursday 05 February 2026 00:59:22 +0000 (0:00:00.258) 0:00:00.258 ***** 2026-02-05 01:00:40.921598 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921601 | orchestrator | 2026-02-05 01:00:40.921652 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-05 01:00:40.921659 | orchestrator | Thursday 05 February 2026 00:59:24 +0000 (0:00:01.394) 0:00:01.652 ***** 2026-02-05 01:00:40.921665 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921671 | orchestrator | 2026-02-05 01:00:40.921677 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-05 01:00:40.921683 | orchestrator | Thursday 05 February 2026 00:59:25 +0000 (0:00:01.109) 0:00:02.762 ***** 2026-02-05 01:00:40.921688 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921695 | orchestrator | 2026-02-05 01:00:40.921701 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-05 01:00:40.921707 | orchestrator | Thursday 05 February 2026 00:59:26 +0000 (0:00:01.057) 0:00:03.819 ***** 2026-02-05 01:00:40.921713 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921720 | orchestrator | 2026-02-05 01:00:40.921725 | orchestrator | TASK 
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-05 01:00:40.921732 | orchestrator | Thursday 05 February 2026 00:59:27 +0000 (0:00:01.145) 0:00:04.964 ***** 2026-02-05 01:00:40.921739 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921745 | orchestrator | 2026-02-05 01:00:40.921752 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-05 01:00:40.921758 | orchestrator | Thursday 05 February 2026 00:59:28 +0000 (0:00:01.019) 0:00:05.983 ***** 2026-02-05 01:00:40.921765 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921771 | orchestrator | 2026-02-05 01:00:40.921778 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-05 01:00:40.921812 | orchestrator | Thursday 05 February 2026 00:59:29 +0000 (0:00:01.086) 0:00:07.070 ***** 2026-02-05 01:00:40.921817 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921822 | orchestrator | 2026-02-05 01:00:40.921826 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-05 01:00:40.921832 | orchestrator | Thursday 05 February 2026 00:59:31 +0000 (0:00:02.031) 0:00:09.101 ***** 2026-02-05 01:00:40.921845 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921852 | orchestrator | 2026-02-05 01:00:40.921858 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-05 01:00:40.921865 | orchestrator | Thursday 05 February 2026 00:59:32 +0000 (0:00:01.224) 0:00:10.326 ***** 2026-02-05 01:00:40.921871 | orchestrator | changed: [testbed-manager] 2026-02-05 01:00:40.921878 | orchestrator | 2026-02-05 01:00:40.921884 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-05 01:00:40.921909 | orchestrator | Thursday 05 February 2026 01:00:15 +0000 (0:00:43.040) 0:00:53.367 ***** 2026-02-05 
01:00:40.921914 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:00:40.921919 | orchestrator | 2026-02-05 01:00:40.921926 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-05 01:00:40.921932 | orchestrator | 2026-02-05 01:00:40.921944 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-05 01:00:40.921951 | orchestrator | Thursday 05 February 2026 01:00:15 +0000 (0:00:00.129) 0:00:53.497 ***** 2026-02-05 01:00:40.921957 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:40.921964 | orchestrator | 2026-02-05 01:00:40.921970 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-05 01:00:40.921977 | orchestrator | 2026-02-05 01:00:40.921983 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-05 01:00:40.921990 | orchestrator | Thursday 05 February 2026 01:00:27 +0000 (0:00:11.418) 0:01:04.915 ***** 2026-02-05 01:00:40.921996 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:00:40.922003 | orchestrator | 2026-02-05 01:00:40.922052 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-05 01:00:40.922059 | orchestrator | 2026-02-05 01:00:40.922064 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-05 01:00:40.922068 | orchestrator | Thursday 05 February 2026 01:00:28 +0000 (0:00:01.247) 0:01:06.163 ***** 2026-02-05 01:00:40.922073 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:00:40.922078 | orchestrator | 2026-02-05 01:00:40.922082 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:00:40.922087 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:00:40.922091 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:00:40.922096 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:00:40.922100 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:00:40.922103 | orchestrator | 2026-02-05 01:00:40.922107 | orchestrator | 2026-02-05 01:00:40.922111 | orchestrator | 2026-02-05 01:00:40.922115 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:00:40.922118 | orchestrator | Thursday 05 February 2026 01:00:39 +0000 (0:00:11.194) 0:01:17.358 ***** 2026-02-05 01:00:40.922122 | orchestrator | =============================================================================== 2026-02-05 01:00:40.922126 | orchestrator | Create admin user ------------------------------------------------------ 43.04s 2026-02-05 01:00:40.922130 | orchestrator | Restart ceph manager service ------------------------------------------- 23.86s 2026-02-05 01:00:40.922134 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.03s 2026-02-05 01:00:40.922138 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.39s 2026-02-05 01:00:40.922141 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.22s 2026-02-05 01:00:40.922145 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.15s 2026-02-05 01:00:40.922149 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s 2026-02-05 01:00:40.922157 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s 2026-02-05 01:00:40.922161 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.06s 2026-02-05 01:00:40.922164 | orchestrator | Set 
mgr/dashboard/standby_behaviour to error ---------------------------- 1.02s 2026-02-05 01:00:40.922168 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-02-05 01:00:40.922271 | orchestrator | 2026-02-05 01:00:40 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:40.922339 | orchestrator | 2026-02-05 01:00:40 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:40.922346 | orchestrator | 2026-02-05 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:43.951752 | orchestrator | 2026-02-05 01:00:43 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:43.954853 | orchestrator | 2026-02-05 01:00:43 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:43.955195 | orchestrator | 2026-02-05 01:00:43 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:43.955903 | orchestrator | 2026-02-05 01:00:43 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:43.955934 | orchestrator | 2026-02-05 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:46.977244 | orchestrator | 2026-02-05 01:00:46 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:00:46.977700 | orchestrator | 2026-02-05 01:00:46 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:00:46.978504 | orchestrator | 2026-02-05 01:00:46 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:00:46.979736 | orchestrator | 2026-02-05 01:00:46 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:00:46.979776 | orchestrator | 2026-02-05 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:00:50.050053 | orchestrator | 2026-02-05 01:00:50 | INFO  | Task 
87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:01:08.237980 | orchestrator | 2026-02-05 01:01:08 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:01:08.239263 | orchestrator | 2026-02-05 01:01:08 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:01:08.240519 | orchestrator | 2026-02-05 01:01:08 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:01:08.240773 | orchestrator | 2026-02-05 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:01:11.281091 | orchestrator | 2026-02-05 01:01:11 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state STARTED 2026-02-05 01:01:11.282986 | orchestrator | 2026-02-05 01:01:11 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:01:11.284386 | orchestrator | 2026-02-05 01:01:11 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:01:11.286129 | orchestrator | 2026-02-05 01:01:11 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:01:11.286166 | orchestrator | 2026-02-05 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:01:14.315442 | orchestrator | 2026-02-05 01:01:14 | INFO  | Task 87002255-60ba-40cb-83ce-b3ee424e6c57 is in state SUCCESS 2026-02-05 01:01:14.316894 | orchestrator | 2026-02-05 01:01:14.316962 | orchestrator | 2026-02-05 01:01:14.316973 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:01:14.316981 | orchestrator | 2026-02-05 01:01:14.316988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:01:14.316996 | orchestrator | Thursday 05 February 2026 00:58:57 +0000 (0:00:00.222) 0:00:00.222 ***** 2026-02-05 01:01:14.317003 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:01:14.317011 | orchestrator | ok: [testbed-node-1] 
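The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a poll-until-terminal loop: the orchestrator re-queries each submitted task until every one reaches a terminal state. A minimal sketch of that pattern (the `get_state` callable and the state names are illustrative stand-ins, not the actual OSISM watcher API):

```python
import time

# Terminal states assumed for illustration; Celery-style names.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task until every one reaches a terminal state,
    logging progress in the same shape as the job output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Tasks finish independently, so the set shrinks one at a time, which is why the log shows four STARTED lines per cycle until the first task flips to SUCCESS.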
2026-02-05 01:01:14.317017 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:01:14.317023 | orchestrator | 2026-02-05 01:01:14.317030 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:01:14.317037 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.284) 0:00:00.507 ***** 2026-02-05 01:01:14.317045 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-05 01:01:14.317053 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-05 01:01:14.317060 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-05 01:01:14.317067 | orchestrator | 2026-02-05 01:01:14.317073 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-05 01:01:14.317080 | orchestrator | 2026-02-05 01:01:14.317159 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-05 01:01:14.317170 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.327) 0:00:00.835 ***** 2026-02-05 01:01:14.317178 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:01:14.317187 | orchestrator | 2026-02-05 01:01:14.317194 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-05 01:01:14.317201 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.472) 0:00:01.307 ***** 2026-02-05 01:01:14.317209 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-05 01:01:14.317215 | orchestrator | 2026-02-05 01:01:14.317222 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-05 01:01:14.317230 | orchestrator | Thursday 05 February 2026 00:59:03 +0000 (0:00:04.112) 0:00:05.419 ***** 2026-02-05 01:01:14.317237 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-05 01:01:14.317245 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-05 01:01:14.317252 | orchestrator | 2026-02-05 01:01:14.317259 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-05 01:01:14.317266 | orchestrator | Thursday 05 February 2026 00:59:10 +0000 (0:00:07.300) 0:00:12.720 ***** 2026-02-05 01:01:14.317273 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 2026-02-05 01:01:14.317299 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:01:14.317307 | orchestrator | 2026-02-05 01:01:14.317314 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-05 01:01:14.317321 | orchestrator | Thursday 05 February 2026 00:59:27 +0000 (0:00:17.109) 0:00:29.829 ***** 2026-02-05 01:01:14.317327 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:01:14.317334 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-05 01:01:14.317341 | orchestrator | 2026-02-05 01:01:14.317348 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-05 01:01:14.317354 | orchestrator | Thursday 05 February 2026 00:59:31 +0000 (0:00:04.392) 0:00:34.222 ***** 2026-02-05 01:01:14.317361 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:01:14.317367 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-05 01:01:14.317374 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-05 01:01:14.317381 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-05 01:01:14.317408 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-05 01:01:14.317415 | orchestrator | 2026-02-05 01:01:14.317422 | 
orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-05 01:01:14.317428 | orchestrator | Thursday 05 February 2026 00:59:49 +0000 (0:00:17.506) 0:00:51.728 ***** 2026-02-05 01:01:14.317434 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-05 01:01:14.317441 | orchestrator | 2026-02-05 01:01:14.317447 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-05 01:01:14.317453 | orchestrator | Thursday 05 February 2026 00:59:53 +0000 (0:00:04.485) 0:00:56.213 ***** 2026-02-05 01:01:14.317482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.317511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.317518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.317527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317655 | orchestrator | 2026-02-05 01:01:14.317663 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-05 01:01:14.317670 | orchestrator | Thursday 05 February 2026 00:59:55 +0000 (0:00:02.188) 0:00:58.401 ***** 2026-02-05 01:01:14.317677 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 
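The service-ks-register tasks earlier in this play (creating services, endpoints, projects, users, and roles for barbican) are idempotent: a first run reports "changed", a re-run reports "ok". A toy sketch of that create-if-absent pattern, using an in-memory dict in place of the real Keystone API (names and return values are illustrative only):

```python
def ensure_service(catalog, name, service_type):
    """Register a service once; repeat runs report 'ok' instead of 'changed'."""
    key = (name, service_type)
    if key in catalog.setdefault("services", {}):
        return "ok"
    catalog["services"][key] = {"endpoints": {}}
    return "changed"

def ensure_endpoint(catalog, name, service_type, interface, url):
    """Register an endpoint for an existing service, idempotently."""
    svc = catalog["services"][(name, service_type)]
    if svc["endpoints"].get(interface) == url:
        return "ok"
    svc["endpoints"][interface] = url
    return "changed"
```

For example, registering `barbican (key-manager)` and then its internal endpoint `https://api-int.testbed.osism.xyz:9311` returns "changed" on the first call and "ok" thereafter, matching the ok/changed mix in the task output above.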
2026-02-05 01:01:14.317684 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-05 01:01:14.317691 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-05 01:01:14.317698 | orchestrator | 2026-02-05 01:01:14.317705 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-05 01:01:14.317718 | orchestrator | Thursday 05 February 2026 00:59:57 +0000 (0:00:01.129) 0:00:59.531 ***** 2026-02-05 01:01:14.317725 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.317733 | orchestrator | 2026-02-05 01:01:14.317740 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-05 01:01:14.317746 | orchestrator | Thursday 05 February 2026 00:59:57 +0000 (0:00:00.212) 0:00:59.744 ***** 2026-02-05 01:01:14.317753 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.317760 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:14.317767 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:14.317774 | orchestrator | 2026-02-05 01:01:14.317781 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-05 01:01:14.317787 | orchestrator | Thursday 05 February 2026 00:59:57 +0000 (0:00:00.597) 0:01:00.342 ***** 2026-02-05 01:01:14.317794 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:01:14.317802 | orchestrator | 2026-02-05 01:01:14.317809 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-05 01:01:14.317815 | orchestrator | Thursday 05 February 2026 00:59:58 +0000 (0:00:00.565) 0:01:00.908 ***** 2026-02-05 01:01:14.317829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.317845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.317852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.317867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.317924 | orchestrator | 2026-02-05 01:01:14.317932 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-05 01:01:14.317939 | orchestrator | Thursday 05 February 2026 01:00:02 +0000 (0:00:03.682) 0:01:04.590 ***** 2026-02-05 01:01:14.317946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.317953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.317964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.317971 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.317983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.317991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318011 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:14.318069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.318078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318100 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:14.318106 | orchestrator | 2026-02-05 01:01:14.318112 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-05 01:01:14.318119 | orchestrator | Thursday 05 February 2026 01:00:03 +0000 (0:00:01.600) 0:01:06.191 ***** 2026-02-05 01:01:14.318125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.318138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318151 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.318157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.318173 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318197 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:14.318204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.318210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.318224 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:14.318229 | orchestrator | 2026-02-05 01:01:14.318235 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-05 01:01:14.318241 | orchestrator | Thursday 05 February 2026 01:00:04 +0000 
(0:00:00.928) 0:01:07.120 ***** 2026-02-05 01:01:14.318253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.318727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2026-02-05 01:01:14.318834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.318841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.318848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.318864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.318882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.318892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.318896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.318901 | orchestrator | 2026-02-05 01:01:14.318906 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-05 01:01:14.318911 | orchestrator | Thursday 05 February 2026 01:00:08 +0000 (0:00:03.485) 0:01:10.606 ***** 2026-02-05 01:01:14.318916 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:01:14.318921 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:01:14.318925 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:01:14.318929 | orchestrator | 2026-02-05 01:01:14.318933 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-05 01:01:14.318937 | orchestrator | Thursday 05 February 2026 01:00:10 +0000 (0:00:02.567) 0:01:13.173 ***** 2026-02-05 01:01:14.318941 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:01:14.318945 | orchestrator | 2026-02-05 01:01:14.318949 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] 
************************** 2026-02-05 01:01:14.318952 | orchestrator | Thursday 05 February 2026 01:00:13 +0000 (0:00:02.320) 0:01:15.494 ***** 2026-02-05 01:01:14.318956 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.318960 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:14.318964 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:14.318967 | orchestrator | 2026-02-05 01:01:14.318971 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-05 01:01:14.318975 | orchestrator | Thursday 05 February 2026 01:00:13 +0000 (0:00:00.466) 0:01:15.961 ***** 2026-02-05 01:01:14.318979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.318991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.318999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.319003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319038 | orchestrator | 2026-02-05 01:01:14.319042 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-05 01:01:14.319046 | orchestrator | Thursday 05 February 2026 01:00:22 +0000 (0:00:09.218) 0:01:25.180 ***** 2026-02-05 01:01:14.319050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.319054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.319059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.319066 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.319076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.319080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-02-05 01:01:14.319084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.319088 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:14.319092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:01:14.319096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.319106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:14.319110 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:14.319115 | orchestrator | 2026-02-05 01:01:14.319121 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-05 01:01:14.319126 | orchestrator | Thursday 05 February 2026 01:00:23 +0000 (0:00:01.105) 0:01:26.286 ***** 2026-02-05 01:01:14.319136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.319142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.319150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:01:14.319156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:14.319208 | orchestrator | 2026-02-05 01:01:14.319214 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-05 01:01:14.319220 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:02.619) 0:01:28.905 ***** 2026-02-05 01:01:14.319232 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:14.319238 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:14.319244 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:14.319250 | orchestrator | 2026-02-05 01:01:14.319257 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-05 01:01:14.319264 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:00.210) 0:01:29.116 ***** 2026-02-05 01:01:14.319270 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:01:14.319277 | orchestrator | 2026-02-05 01:01:14.319281 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-05 01:01:14.319286 | orchestrator | Thursday 05 February 2026 01:00:29 +0000 (0:00:02.301) 0:01:31.417 ***** 2026-02-05 01:01:14.319290 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:01:14.319295 | orchestrator | 2026-02-05 01:01:14.319299 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-05 01:01:14.319305 | orchestrator | Thursday 05 February 2026 01:00:31 +0000 (0:00:02.645) 0:01:34.063 ***** 2026-02-05 01:01:14.319311 | orchestrator | changed: [testbed-node-0] 
2026-02-05 01:01:14.319317 | orchestrator | 2026-02-05 01:01:14.319324 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 01:01:14.319333 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:13.019) 0:01:47.083 ***** 2026-02-05 01:01:14.319341 | orchestrator | 2026-02-05 01:01:14.319347 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 01:01:14.319354 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:00.105) 0:01:47.188 ***** 2026-02-05 01:01:14.319360 | orchestrator | 2026-02-05 01:01:14.319370 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 01:01:14.319376 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:00.107) 0:01:47.296 ***** 2026-02-05 01:01:14.319382 | orchestrator | 2026-02-05 01:01:14.319388 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-05 01:01:14.319394 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:00.054) 0:01:47.351 ***** 2026-02-05 01:01:14.319400 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:01:14.319407 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:01:14.319413 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:01:14.319419 | orchestrator | 2026-02-05 01:01:14.319425 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-05 01:01:14.319432 | orchestrator | Thursday 05 February 2026 01:00:51 +0000 (0:00:06.434) 0:01:53.786 ***** 2026-02-05 01:01:14.319443 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:01:14.319450 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:01:14.319456 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:01:14.319462 | orchestrator | 2026-02-05 01:01:14.319468 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker 
container] ***************** 2026-02-05 01:01:14.319475 | orchestrator | Thursday 05 February 2026 01:01:01 +0000 (0:00:10.262) 0:02:04.048 ***** 2026-02-05 01:01:14.319481 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:01:14.319485 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:01:14.319490 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:01:14.319494 | orchestrator | 2026-02-05 01:01:14.319499 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:01:14.319504 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:01:14.319509 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:01:14.319514 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:01:14.319518 | orchestrator | 2026-02-05 01:01:14.319523 | orchestrator | 2026-02-05 01:01:14.319527 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:01:14.319539 | orchestrator | Thursday 05 February 2026 01:01:12 +0000 (0:00:11.186) 0:02:15.234 ***** 2026-02-05 01:01:14.319543 | orchestrator | =============================================================================== 2026-02-05 01:01:14.319548 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.51s 2026-02-05 01:01:14.319553 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 17.11s 2026-02-05 01:01:14.319557 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.02s 2026-02-05 01:01:14.319562 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.19s 2026-02-05 01:01:14.319566 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 
10.26s 2026-02-05 01:01:14.319569 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.22s 2026-02-05 01:01:14.319573 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.30s 2026-02-05 01:01:14.319577 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.43s 2026-02-05 01:01:14.319580 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.49s 2026-02-05 01:01:14.319584 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.39s 2026-02-05 01:01:14.319588 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.11s 2026-02-05 01:01:14.319611 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.68s 2026-02-05 01:01:14.319615 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.49s 2026-02-05 01:01:14.319619 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.65s 2026-02-05 01:01:14.319622 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.62s 2026-02-05 01:01:14.319626 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.57s 2026-02-05 01:01:14.319630 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.32s 2026-02-05 01:01:14.319634 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.30s 2026-02-05 01:01:14.319638 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.19s 2026-02-05 01:01:14.319641 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.60s 2026-02-05 01:01:14.319645 | orchestrator | 2026-02-05 01:01:14 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in 
state STARTED 2026-02-05 01:01:14.319650 | orchestrator | 2026-02-05 01:01:14 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED 2026-02-05 01:01:14.319967 | orchestrator | 2026-02-05 01:01:14 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:01:14.321835 | orchestrator | 2026-02-05 01:01:14 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state STARTED 2026-02-05 01:01:14.321922 | orchestrator | 2026-02-05 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:01:56.965175 | orchestrator | 2026-02-05 01:01:56 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:01:56.965936 | orchestrator | 2026-02-05 01:01:56 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:01:56.968172 | orchestrator | 2026-02-05 01:01:56 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED 2026-02-05 01:01:56.969092 | orchestrator | 2026-02-05 01:01:56 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:01:56.971044 | orchestrator | 2026-02-05 01:01:56 | INFO  | Task 0141ce02-57b7-49a4-8625-fe7d7c66d921 is in state SUCCESS 2026-02-05 01:01:56.975093 | orchestrator | 2026-02-05 01:01:56.975164 | orchestrator | 2026-02-05 01:01:56.975171 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:01:56.975176 | orchestrator | 2026-02-05 01:01:56.975180 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:01:56.975185 | orchestrator | Thursday 05 February 2026 00:58:57 +0000 (0:00:00.199) 0:00:00.199 ***** 2026-02-05 01:01:56.975201 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:01:56.975206 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:01:56.975210 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:01:56.975214 | orchestrator | 2026-02-05 01:01:56.975218 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:01:56.975221 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.242) 0:00:00.441 ***** 2026-02-05 01:01:56.975226 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-05 01:01:56.975230 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-05 01:01:56.975233 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-05 01:01:56.975237 | orchestrator | 2026-02-05 01:01:56.975241 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-05 01:01:56.975245 | orchestrator | 2026-02-05 01:01:56.975248 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-05 01:01:56.975252 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.343) 0:00:00.785 ***** 2026-02-05 01:01:56.975256 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:01:56.975262 | orchestrator | 2026-02-05 01:01:56.975265 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-02-05 01:01:56.975269 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.400) 0:00:01.185 ***** 2026-02-05 01:01:56.975274 | orchestrator | changed: 
[testbed-node-0] => (item=designate (dns)) 2026-02-05 01:01:56.975277 | orchestrator | 2026-02-05 01:01:56.975281 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-05 01:01:56.975285 | orchestrator | Thursday 05 February 2026 00:59:02 +0000 (0:00:03.993) 0:00:05.179 ***** 2026-02-05 01:01:56.975288 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-05 01:01:56.975293 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-05 01:01:56.975297 | orchestrator | 2026-02-05 01:01:56.975300 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-05 01:01:56.975304 | orchestrator | Thursday 05 February 2026 00:59:10 +0000 (0:00:07.157) 0:00:12.336 ***** 2026-02-05 01:01:56.975308 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-05 01:01:56.975312 | orchestrator | 2026-02-05 01:01:56.975316 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-05 01:01:56.975319 | orchestrator | Thursday 05 February 2026 00:59:13 +0000 (0:00:03.738) 0:00:16.074 ***** 2026-02-05 01:01:56.975337 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:01:56.975341 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-05 01:01:56.975345 | orchestrator | 2026-02-05 01:01:56.975348 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-05 01:01:56.975352 | orchestrator | Thursday 05 February 2026 00:59:18 +0000 (0:00:04.211) 0:00:20.285 ***** 2026-02-05 01:01:56.975356 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:01:56.975360 | orchestrator | 2026-02-05 01:01:56.975364 | orchestrator | TASK [service-ks-register : designate | Granting user roles] 
******************* 2026-02-05 01:01:56.975367 | orchestrator | Thursday 05 February 2026 00:59:21 +0000 (0:00:03.628) 0:00:23.914 ***** 2026-02-05 01:01:56.975371 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-05 01:01:56.975375 | orchestrator | 2026-02-05 01:01:56.975379 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-05 01:01:56.975382 | orchestrator | Thursday 05 February 2026 00:59:25 +0000 (0:00:04.232) 0:00:28.146 ***** 2026-02-05 01:01:56.975389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:01:56.975411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:01:56.975416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:01:56.975421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 
01:01:56.975431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975470 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975510 | orchestrator | 2026-02-05 01:01:56.975514 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-05 01:01:56.975518 | orchestrator | Thursday 05 February 2026 00:59:29 +0000 (0:00:03.101) 0:00:31.247 ***** 2026-02-05 01:01:56.975522 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:56.975526 | orchestrator | 2026-02-05 01:01:56.975530 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-05 01:01:56.975533 | orchestrator | Thursday 05 February 2026 00:59:29 +0000 (0:00:00.117) 0:00:31.364 ***** 2026-02-05 01:01:56.975537 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:56.975541 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:56.975545 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:01:56.975548 | 
orchestrator | 2026-02-05 01:01:56.975552 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-05 01:01:56.975556 | orchestrator | Thursday 05 February 2026 00:59:29 +0000 (0:00:00.291) 0:00:31.655 ***** 2026-02-05 01:01:56.975560 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:01:56.975563 | orchestrator | 2026-02-05 01:01:56.975616 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-05 01:01:56.975621 | orchestrator | Thursday 05 February 2026 00:59:30 +0000 (0:00:00.737) 0:00:32.393 ***** 2026-02-05 01:01:56.975629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:01:56.975637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:01:56.975754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:01:56.975762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 
01:01:56.975873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.975881 | orchestrator | 2026-02-05 01:01:56.975885 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-05 01:01:56.975889 | orchestrator | Thursday 05 February 2026 00:59:36 +0000 (0:00:06.564) 0:00:38.957 ***** 2026-02-05 01:01:56.975893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.975902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:01:56.975912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.975918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.975925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.975931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.975937 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:01:56.975943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.976111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:01:56.976123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.976127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.976131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.976135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.976139 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:01:56.976143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.976160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:01:56.976164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976180 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:01:56.976217 | orchestrator |
2026-02-05 01:01:56.976221 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-02-05 01:01:56.976225 | orchestrator | Thursday 05 February 2026 00:59:37 +0000 (0:00:01.068) 0:00:40.026 *****
2026-02-05 01:01:56.976229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976262 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:01:56.976266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976298 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:01:56.976306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976437 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:01:56.976449 | orchestrator |
2026-02-05 01:01:56.976455 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-05 01:01:56.976461 | orchestrator | Thursday 05 February 2026 00:59:39 +0000 (0:00:01.418) 0:00:41.445 *****
2026-02-05 01:01:56.976468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976600 | orchestrator |
2026-02-05 01:01:56.976604 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-05 01:01:56.976608 | orchestrator | Thursday 05 February 2026 00:59:45 +0000 (0:00:06.425) 0:00:47.870 *****
2026-02-05 01:01:56.976614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.976904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.976923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.976976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.976980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.976987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.976991 | orchestrator | 2026-02-05 01:01:56.976995 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-05 01:01:56.977000 | orchestrator | Thursday 05 February 2026 01:00:02 +0000 (0:00:17.351) 0:01:05.222 ***** 2026-02-05 01:01:56.977006 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 01:01:56.977013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 
01:01:56.977019 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 01:01:56.977025 | orchestrator | 2026-02-05 01:01:56.977031 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-05 01:01:56.977036 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:04.502) 0:01:09.724 ***** 2026-02-05 01:01:56.977042 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 01:01:56.977048 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 01:01:56.977054 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 01:01:56.977060 | orchestrator | 2026-02-05 01:01:56.977066 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-05 01:01:56.977072 | orchestrator | Thursday 05 February 2026 01:00:10 +0000 (0:00:03.275) 0:01:13.000 ***** 2026-02-05 01:01:56.977088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
2026-02-05 01:01:56.977094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.977106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.977112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977202 | orchestrator | 2026-02-05 01:01:56.977208 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-05 01:01:56.977214 | orchestrator | Thursday 05 February 2026 01:00:14 +0000 (0:00:03.410) 0:01:16.410 ***** 
2026-02-05 01:01:56.977228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.977235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.977248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:01:56.977254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-05 01:01:56.977316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:01:56.977320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:01:56.977328 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977332 | orchestrator |
2026-02-05 01:01:56.977336 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 01:01:56.977340 | orchestrator | Thursday 05 February 2026 01:00:18 +0000 (0:00:03.964) 0:01:20.375 *****
2026-02-05 01:01:56.977344 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:01:56.977348 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:01:56.977352 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:01:56.977356 | orchestrator |
2026-02-05 01:01:56.977359 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-05 01:01:56.977363 | orchestrator | Thursday 05 February 2026 01:00:19 +0000 (0:00:00.889) 0:01:21.264 *****
2026-02-05 01:01:56.977373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.977381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.977385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977401 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:01:56.977410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.977418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.977422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977438 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:01:56.977449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.977457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.977461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977481 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:01:56.977485 | orchestrator |
2026-02-05 01:01:56.977489 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-05 01:01:56.977494 | orchestrator | Thursday 05 February 2026 01:00:20 +0000 (0:00:01.527) 0:01:22.792 *****
2026-02-05 01:01:56.977506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.977511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.977515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:01:56.977520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.977525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.977533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:01:56.977543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:01:56.977636 | orchestrator |
2026-02-05 01:01:56.977641 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 01:01:56.977645 | orchestrator | Thursday 05 February 2026 01:00:25 +0000 (0:00:04.741) 0:01:27.533 *****
2026-02-05 01:01:56.977650 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:01:56.977654 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:01:56.977659 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:01:56.977663 | orchestrator |
2026-02-05 01:01:56.977668 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-05 01:01:56.977672 | orchestrator | Thursday 05 February 2026 01:00:25 +0000 (0:00:00.486) 0:01:28.020 *****
2026-02-05 01:01:56.977677 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-05 01:01:56.977682 | orchestrator |
2026-02-05 01:01:56.977686 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-05 01:01:56.977691 | orchestrator | Thursday 05 February 2026 01:00:28 +0000 (0:00:02.389) 0:01:30.409 *****
2026-02-05 01:01:56.977695 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 01:01:56.977700 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-05 01:01:56.977704 | orchestrator |
2026-02-05 01:01:56.977709 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-05 01:01:56.977716 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:02.519) 0:01:32.929 *****
2026-02-05 01:01:56.977721 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977726 | orchestrator |
2026-02-05 01:01:56.977730 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-05 01:01:56.977735 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:16.710) 0:01:49.639 *****
2026-02-05 01:01:56.977739 | orchestrator |
2026-02-05 01:01:56.977747 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-05 01:01:56.977751 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:00.213) 0:01:49.852 *****
2026-02-05 01:01:56.977755 | orchestrator |
2026-02-05 01:01:56.977760 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-05 01:01:56.977764 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:00.049) 0:01:49.902 *****
2026-02-05 01:01:56.977769 | orchestrator |
2026-02-05 01:01:56.977773 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-05 01:01:56.977778 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:00.057) 0:01:49.959 *****
2026-02-05 01:01:56.977782 | orchestrator | changed: [testbed-node-0]
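Each container definition in the loop items above carries a kolla-style healthcheck (`healthcheck_port`, `healthcheck_curl`, or `healthcheck_listen`) with `interval`, `retries`, `start_period`, and `timeout` parameters. As a minimal illustration of what a port-based check amounts to, the following sketch attempts a TCP connection within a deadline; it is a hypothetical stand-in, not the actual `healthcheck_port` script shipped in the kolla images:

```python
import socket


def check_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Roughly mirrors the role of a 'healthcheck_port <service> <port>' style
    check: the container is considered healthy if something accepts on the port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out -> unhealthy.
        return False
```

The `interval`/`retries` parameters seen in the log would be applied by the container engine, which reruns such a test periodically and only marks the container unhealthy after the configured number of consecutive failures.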
2026-02-05 01:01:56.977787 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:01:56.977791 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:01:56.977796 | orchestrator |
2026-02-05 01:01:56.977800 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-05 01:01:56.977804 | orchestrator | Thursday 05 February 2026 01:01:01 +0000 (0:00:13.922) 0:02:03.882 *****
2026-02-05 01:01:56.977809 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:01:56.977813 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:01:56.977818 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977822 | orchestrator |
2026-02-05 01:01:56.977826 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-05 01:01:56.977831 | orchestrator | Thursday 05 February 2026 01:01:11 +0000 (0:00:09.780) 0:02:13.662 *****
2026-02-05 01:01:56.977835 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977840 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:01:56.977847 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:01:56.977852 | orchestrator |
2026-02-05 01:01:56.977857 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-05 01:01:56.977861 | orchestrator | Thursday 05 February 2026 01:01:21 +0000 (0:00:10.349) 0:02:24.012 *****
2026-02-05 01:01:56.977866 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977871 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:01:56.977875 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:01:56.977880 | orchestrator |
2026-02-05 01:01:56.977886 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-02-05 01:01:56.977893 | orchestrator | Thursday 05 February 2026 01:01:31 +0000 (0:00:10.127) 0:02:34.140 *****
2026-02-05 01:01:56.977898 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977906 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:01:56.977912 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:01:56.977918 | orchestrator |
2026-02-05 01:01:56.977925 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-02-05 01:01:56.977932 | orchestrator | Thursday 05 February 2026 01:01:42 +0000 (0:00:10.302) 0:02:44.443 *****
2026-02-05 01:01:56.977939 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977946 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:01:56.977953 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:01:56.977960 | orchestrator |
2026-02-05 01:01:56.977966 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-02-05 01:01:56.977972 | orchestrator | Thursday 05 February 2026 01:01:48 +0000 (0:00:05.871) 0:02:50.314 *****
2026-02-05 01:01:56.977979 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:01:56.977985 | orchestrator |
2026-02-05 01:01:56.977991 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:01:56.977999 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 01:01:56.978008 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:01:56.978120 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:01:56.978131 | orchestrator |
2026-02-05 01:01:56.978137 | orchestrator |
2026-02-05 01:01:56.978144 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:01:56.978151 | orchestrator | Thursday 05 February 2026 01:01:55 +0000 (0:00:07.151) 0:02:57.465 *****
2026-02-05 01:01:56.978157 | orchestrator | ===============================================================================
2026-02-05 01:01:56.978163 | orchestrator | designate : Copying over designate.conf -------------------------------- 17.35s
2026-02-05 01:01:56.978169 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.71s
2026-02-05 01:01:56.978176 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.92s
2026-02-05 01:01:56.978182 | orchestrator | designate : Restart designate-central container ------------------------ 10.35s
2026-02-05 01:01:56.978189 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.30s
2026-02-05 01:01:56.978196 | orchestrator | designate : Restart designate-producer container ----------------------- 10.13s
2026-02-05 01:01:56.978202 | orchestrator | designate : Restart designate-api container ----------------------------- 9.78s
2026-02-05 01:01:56.978208 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.16s
2026-02-05 01:01:56.978215 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.15s
2026-02-05 01:01:56.978221 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.56s
2026-02-05 01:01:56.978232 | orchestrator | designate : Copying over config.json files for services ----------------- 6.43s
2026-02-05 01:01:56.978239 | orchestrator | designate : Restart designate-worker container -------------------------- 5.87s
2026-02-05 01:01:56.978255 | orchestrator | designate : Check designate containers ---------------------------------- 4.74s
2026-02-05 01:01:56.978265 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.50s
2026-02-05 01:01:56.978271 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.23s
2026-02-05 01:01:56.978278 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.21s
2026-02-05 01:01:56.978285 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.99s
2026-02-05 01:01:56.978291 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.96s
2026-02-05 01:01:56.978297 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.74s
2026-02-05 01:01:56.978303 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.63s
2026-02-05 01:01:56.978310 | orchestrator | 2026-02-05 01:01:56 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:00.021416 | orchestrator | 2026-02-05 01:02:00 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:00.023281 | orchestrator | 2026-02-05 01:02:00 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:00.026096 | orchestrator | 2026-02-05 01:02:00 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:00.027337 | orchestrator | 2026-02-05 01:02:00 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:00.027695 | orchestrator | 2026-02-05 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:03.063299 | orchestrator | 2026-02-05 01:02:03 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:03.065663 | orchestrator | 2026-02-05 01:02:03 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:03.067321 | orchestrator | 2026-02-05 01:02:03 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:03.068910 | orchestrator | 2026-02-05 01:02:03 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:03.068988 | orchestrator | 2026-02-05 01:02:03 | INFO  | Wait 1 second(s) until the next check
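The PLAY RECAP lines above follow Ansible's fixed `host : ok=N changed=N unreachable=N failed=N ...` shape, which makes them easy to scan programmatically, e.g. to fail a CI step when any host reports `failed` or `unreachable` counts above zero. A small sketch (the helper name and the idea of post-processing the recap are illustrative, not part of this job):

```python
import re

# Matches the counter fields Ansible prints in a PLAY RECAP line.
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)


def parse_recap(lines):
    """Return {hostname: {counter: int}} for PLAY RECAP lines; others are ignored."""
    result = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            result[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    return result
```

On the recap shown here this would report `failed=0` and `unreachable=0` for all three testbed nodes, matching the successful designate deployment.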
2026-02-05 01:02:06.107767 | orchestrator | 2026-02-05 01:02:06 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:06.109297 | orchestrator | 2026-02-05 01:02:06 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:06.110593 | orchestrator | 2026-02-05 01:02:06 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:06.111711 | orchestrator | 2026-02-05 01:02:06 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:06.111740 | orchestrator | 2026-02-05 01:02:06 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:09.158190 | orchestrator | 2026-02-05 01:02:09 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:09.160180 | orchestrator | 2026-02-05 01:02:09 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:09.162482 | orchestrator | 2026-02-05 01:02:09 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:09.164619 | orchestrator | 2026-02-05 01:02:09 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:09.164872 | orchestrator | 2026-02-05 01:02:09 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:12.205190 | orchestrator | 2026-02-05 01:02:12 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:12.205681 | orchestrator | 2026-02-05 01:02:12 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:12.208188 | orchestrator | 2026-02-05 01:02:12 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:12.209179 | orchestrator | 2026-02-05 01:02:12 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:12.209219 | orchestrator | 2026-02-05 01:02:12 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:15.238191 | orchestrator | 2026-02-05 01:02:15 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:15.239417 | orchestrator | 2026-02-05 01:02:15 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:15.240850 | orchestrator | 2026-02-05 01:02:15 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:15.242182 | orchestrator | 2026-02-05 01:02:15 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:15.242235 | orchestrator | 2026-02-05 01:02:15 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:18.281805 | orchestrator | 2026-02-05 01:02:18 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:18.283636 | orchestrator | 2026-02-05 01:02:18 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:18.285273 | orchestrator | 2026-02-05 01:02:18 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:18.286085 | orchestrator | 2026-02-05 01:02:18 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:18.286223 | orchestrator | 2026-02-05 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:21.317370 | orchestrator | 2026-02-05 01:02:21 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:02:21.318347 | orchestrator | 2026-02-05 01:02:21 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:02:21.318976 | orchestrator | 2026-02-05 01:02:21 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state STARTED
2026-02-05 01:02:21.319665 | orchestrator | 2026-02-05 01:02:21 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED
2026-02-05 01:02:21.319772 | orchestrator | 2026-02-05 01:02:21 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:24.355930 | orchestrator | 2026-02-05
01:02:24 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:24.358197 | orchestrator | 2026-02-05 01:02:24 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:24.360624 | orchestrator | 2026-02-05 01:02:24 | INFO  | Task 4f5284ad-50b7-4635-9a17-51c004f396a6 is in state SUCCESS 2026-02-05 01:02:24.361267 | orchestrator | 2026-02-05 01:02:24.361333 | orchestrator | 2026-02-05 01:02:24.361342 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:02:24.361349 | orchestrator | 2026-02-05 01:02:24.361355 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:02:24.361362 | orchestrator | Thursday 05 February 2026 01:01:17 +0000 (0:00:00.241) 0:00:00.241 ***** 2026-02-05 01:02:24.361368 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:02:24.361375 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:02:24.361381 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:02:24.361387 | orchestrator | 2026-02-05 01:02:24.361392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:02:24.361399 | orchestrator | Thursday 05 February 2026 01:01:17 +0000 (0:00:00.272) 0:00:00.513 ***** 2026-02-05 01:02:24.361405 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-05 01:02:24.361435 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-05 01:02:24.361443 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-05 01:02:24.361449 | orchestrator | 2026-02-05 01:02:24.361454 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-05 01:02:24.361460 | orchestrator | 2026-02-05 01:02:24.361465 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 01:02:24.361471 | 
orchestrator | Thursday 05 February 2026 01:01:17 +0000 (0:00:00.359) 0:00:00.873 ***** 2026-02-05 01:02:24.361478 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:02:24.361486 | orchestrator | 2026-02-05 01:02:24.361492 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-05 01:02:24.361499 | orchestrator | Thursday 05 February 2026 01:01:18 +0000 (0:00:00.486) 0:00:01.360 ***** 2026-02-05 01:02:24.361504 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-05 01:02:24.361508 | orchestrator | 2026-02-05 01:02:24.361512 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-05 01:02:24.361516 | orchestrator | Thursday 05 February 2026 01:01:22 +0000 (0:00:04.018) 0:00:05.378 ***** 2026-02-05 01:02:24.361520 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-05 01:02:24.361525 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-05 01:02:24.361529 | orchestrator | 2026-02-05 01:02:24.361532 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-05 01:02:24.361536 | orchestrator | Thursday 05 February 2026 01:01:28 +0000 (0:00:06.120) 0:00:11.499 ***** 2026-02-05 01:02:24.361541 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:02:24.361545 | orchestrator | 2026-02-05 01:02:24.361549 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-05 01:02:24.361572 | orchestrator | Thursday 05 February 2026 01:01:31 +0000 (0:00:03.045) 0:00:14.545 ***** 2026-02-05 01:02:24.361576 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:02:24.361580 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-02-05 01:02:24.361584 | orchestrator | 2026-02-05 01:02:24.361587 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-05 01:02:24.361591 | orchestrator | Thursday 05 February 2026 01:01:35 +0000 (0:00:03.679) 0:00:18.224 ***** 2026-02-05 01:02:24.361595 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:02:24.361599 | orchestrator | 2026-02-05 01:02:24.361603 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-05 01:02:24.361607 | orchestrator | Thursday 05 February 2026 01:01:38 +0000 (0:00:03.289) 0:00:21.514 ***** 2026-02-05 01:02:24.361610 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-05 01:02:24.361614 | orchestrator | 2026-02-05 01:02:24.361628 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 01:02:24.361632 | orchestrator | Thursday 05 February 2026 01:01:42 +0000 (0:00:04.027) 0:00:25.541 ***** 2026-02-05 01:02:24.361636 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:24.361639 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:24.361643 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:24.361647 | orchestrator | 2026-02-05 01:02:24.361650 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-05 01:02:24.361654 | orchestrator | Thursday 05 February 2026 01:01:42 +0000 (0:00:00.439) 0:00:25.981 ***** 2026-02-05 01:02:24.361660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361693 | orchestrator | 2026-02-05 01:02:24.361697 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-05 01:02:24.361701 | orchestrator | Thursday 05 February 2026 01:01:44 +0000 (0:00:01.034) 0:00:27.015 ***** 2026-02-05 01:02:24.361704 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:24.361708 | orchestrator | 2026-02-05 01:02:24.361712 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-05 01:02:24.361715 | orchestrator | Thursday 05 February 2026 01:01:44 +0000 (0:00:00.102) 0:00:27.117 ***** 2026-02-05 01:02:24.361719 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:24.361723 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:24.361727 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:24.361730 | orchestrator | 2026-02-05 01:02:24.361734 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 01:02:24.361738 | orchestrator | Thursday 05 February 2026 01:01:44 +0000 (0:00:00.405) 0:00:27.522 ***** 2026-02-05 01:02:24.361742 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:02:24.361745 | orchestrator | 2026-02-05 01:02:24.361749 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-05 01:02:24.361756 | orchestrator | Thursday 05 February 
2026 01:01:44 +0000 (0:00:00.451) 0:00:27.974 ***** 2026-02-05 01:02:24.361760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361776 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361781 | orchestrator | 2026-02-05 01:02:24.361784 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-05 01:02:24.361788 | orchestrator | Thursday 05 February 2026 01:01:46 +0000 (0:00:01.299) 0:00:29.273 ***** 2026-02-05 01:02:24.361792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.361796 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:24.361802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.361810 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:24.361818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.361822 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:24.361827 | orchestrator | 2026-02-05 01:02:24.361831 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-05 01:02:24.361836 | orchestrator | Thursday 05 February 2026 01:01:46 +0000 (0:00:00.675) 0:00:29.949 ***** 2026-02-05 01:02:24.361840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.361845 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:24.361850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.361855 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:24.361866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.361871 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:24.361876 | orchestrator | 2026-02-05 01:02:24.361880 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-05 01:02:24.361885 | orchestrator | Thursday 05 February 2026 01:01:47 +0000 (0:00:00.621) 0:00:30.571 ***** 2026-02-05 01:02:24.361894 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361912 | orchestrator | 2026-02-05 01:02:24.361917 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-05 01:02:24.361921 | orchestrator | Thursday 05 February 2026 01:01:48 +0000 (0:00:01.240) 0:00:31.811 ***** 2026-02-05 01:02:24.361928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-05 01:02:24.361933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.361947 | orchestrator | 2026-02-05 01:02:24.361952 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-05 01:02:24.361956 | orchestrator | Thursday 05 February 2026 01:01:51 +0000 (0:00:02.379) 0:00:34.191 ***** 2026-02-05 01:02:24.361961 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 01:02:24.361965 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 01:02:24.361970 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 01:02:24.361975 | orchestrator | 2026-02-05 01:02:24.361979 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-05 01:02:24.361984 | orchestrator | Thursday 05 February 2026 01:01:52 +0000 (0:00:01.714) 0:00:35.906 ***** 2026-02-05 01:02:24.361988 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:24.361996 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:02:24.362001 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:02:24.362156 | orchestrator | 2026-02-05 01:02:24.362162 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-05 01:02:24.362167 | orchestrator | Thursday 05 February 2026 01:01:54 +0000 (0:00:01.295) 0:00:37.201 ***** 2026-02-05 01:02:24.362175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.362180 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:24.362184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.362188 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:24.362197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:02:24.362202 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:24.362205 | orchestrator | 2026-02-05 01:02:24.362209 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-05 01:02:24.362213 | orchestrator | Thursday 05 February 2026 01:01:54 +0000 (0:00:00.578) 0:00:37.780 ***** 2026-02-05 01:02:24.362217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.362225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.362235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:24.362240 | orchestrator | 2026-02-05 01:02:24.362244 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-05 01:02:24.362247 | orchestrator | Thursday 05 February 2026 01:01:55 +0000 (0:00:01.117) 0:00:38.897 ***** 2026-02-05 01:02:24.362251 | orchestrator | changed: [testbed-node-0] 2026-02-05 
01:02:24.362255 | orchestrator | 2026-02-05 01:02:24.362258 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-05 01:02:24.362262 | orchestrator | Thursday 05 February 2026 01:01:58 +0000 (0:00:02.469) 0:00:41.366 ***** 2026-02-05 01:02:24.362266 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:24.362270 | orchestrator | 2026-02-05 01:02:24.362273 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-05 01:02:24.362277 | orchestrator | Thursday 05 February 2026 01:02:01 +0000 (0:00:03.086) 0:00:44.453 ***** 2026-02-05 01:02:24.362283 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:24.362287 | orchestrator | 2026-02-05 01:02:24.362291 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 01:02:24.362295 | orchestrator | Thursday 05 February 2026 01:02:13 +0000 (0:00:12.235) 0:00:56.689 ***** 2026-02-05 01:02:24.362299 | orchestrator | 2026-02-05 01:02:24.362302 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 01:02:24.362306 | orchestrator | Thursday 05 February 2026 01:02:13 +0000 (0:00:00.059) 0:00:56.749 ***** 2026-02-05 01:02:24.362310 | orchestrator | 2026-02-05 01:02:24.362314 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 01:02:24.362317 | orchestrator | Thursday 05 February 2026 01:02:13 +0000 (0:00:00.064) 0:00:56.813 ***** 2026-02-05 01:02:24.362324 | orchestrator | 2026-02-05 01:02:24.362328 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-05 01:02:24.362332 | orchestrator | Thursday 05 February 2026 01:02:13 +0000 (0:00:00.062) 0:00:56.876 ***** 2026-02-05 01:02:24.362335 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:24.362339 | orchestrator | changed: [testbed-node-1] 2026-02-05 
01:02:24.362343 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:02:24.362347 | orchestrator | 2026-02-05 01:02:24.362350 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:02:24.362355 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:02:24.362359 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:02:24.362363 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:02:24.362367 | orchestrator | 2026-02-05 01:02:24.362371 | orchestrator | 2026-02-05 01:02:24.362374 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:02:24.362378 | orchestrator | Thursday 05 February 2026 01:02:24 +0000 (0:00:10.197) 0:01:07.073 ***** 2026-02-05 01:02:24.362382 | orchestrator | =============================================================================== 2026-02-05 01:02:24.362386 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.24s 2026-02-05 01:02:24.362389 | orchestrator | placement : Restart placement-api container ---------------------------- 10.20s 2026-02-05 01:02:24.362393 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.12s 2026-02-05 01:02:24.362397 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.03s 2026-02-05 01:02:24.362401 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.02s 2026-02-05 01:02:24.362405 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.68s 2026-02-05 01:02:24.362408 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.29s 2026-02-05 01:02:24.362412 | orchestrator | 
placement : Creating placement databases user and setting permissions --- 3.09s 2026-02-05 01:02:24.362416 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.05s 2026-02-05 01:02:24.362420 | orchestrator | placement : Creating placement databases -------------------------------- 2.47s 2026-02-05 01:02:24.362423 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.38s 2026-02-05 01:02:24.362427 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.71s 2026-02-05 01:02:24.362431 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.30s 2026-02-05 01:02:24.362437 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s 2026-02-05 01:02:24.362441 | orchestrator | placement : Copying over config.json files for services ----------------- 1.24s 2026-02-05 01:02:24.362444 | orchestrator | placement : Check placement containers ---------------------------------- 1.12s 2026-02-05 01:02:24.362449 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.03s 2026-02-05 01:02:24.362455 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.68s 2026-02-05 01:02:24.362461 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s 2026-02-05 01:02:24.362467 | orchestrator | placement : Copying over existing policy file --------------------------- 0.58s 2026-02-05 01:02:24.362473 | orchestrator | 2026-02-05 01:02:24 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:24.362479 | orchestrator | 2026-02-05 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:27.400773 | orchestrator | 2026-02-05 01:02:27 | INFO  | Task fc768785-0980-429e-8a7a-bae6192d4ac2 is in state STARTED 2026-02-05 01:02:27.402229 | 
orchestrator | 2026-02-05 01:02:27 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:27.405352 | orchestrator | 2026-02-05 01:02:27 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:27.407038 | orchestrator | 2026-02-05 01:02:27 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:27.407078 | orchestrator | 2026-02-05 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:30.448977 | orchestrator | 2026-02-05 01:02:30 | INFO  | Task fc768785-0980-429e-8a7a-bae6192d4ac2 is in state SUCCESS 2026-02-05 01:02:30.452371 | orchestrator | 2026-02-05 01:02:30 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:30.455376 | orchestrator | 2026-02-05 01:02:30 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:30.456852 | orchestrator | 2026-02-05 01:02:30 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:30.457132 | orchestrator | 2026-02-05 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:33.508722 | orchestrator | 2026-02-05 01:02:33 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:33.510445 | orchestrator | 2026-02-05 01:02:33 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:33.513134 | orchestrator | 2026-02-05 01:02:33 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:33.514198 | orchestrator | 2026-02-05 01:02:33 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:33.514519 | orchestrator | 2026-02-05 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:36.547522 | orchestrator | 2026-02-05 01:02:36 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:36.548724 | orchestrator | 2026-02-05 
01:02:36 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:36.549716 | orchestrator | 2026-02-05 01:02:36 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:36.550424 | orchestrator | 2026-02-05 01:02:36 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:36.551583 | orchestrator | 2026-02-05 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:39.588412 | orchestrator | 2026-02-05 01:02:39 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:39.589003 | orchestrator | 2026-02-05 01:02:39 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:39.591640 | orchestrator | 2026-02-05 01:02:39 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:39.592291 | orchestrator | 2026-02-05 01:02:39 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:39.592328 | orchestrator | 2026-02-05 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:42.620771 | orchestrator | 2026-02-05 01:02:42 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:42.621293 | orchestrator | 2026-02-05 01:02:42 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:42.621326 | orchestrator | 2026-02-05 01:02:42 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:42.621906 | orchestrator | 2026-02-05 01:02:42 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:42.622001 | orchestrator | 2026-02-05 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:45.655869 | orchestrator | 2026-02-05 01:02:45 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:45.656299 | orchestrator | 2026-02-05 01:02:45 | INFO  | Task 
6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:45.657111 | orchestrator | 2026-02-05 01:02:45 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:45.660490 | orchestrator | 2026-02-05 01:02:45 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:45.660573 | orchestrator | 2026-02-05 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:48.685640 | orchestrator | 2026-02-05 01:02:48 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:48.686460 | orchestrator | 2026-02-05 01:02:48 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:48.688294 | orchestrator | 2026-02-05 01:02:48 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:48.690241 | orchestrator | 2026-02-05 01:02:48 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:48.690275 | orchestrator | 2026-02-05 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:51.713125 | orchestrator | 2026-02-05 01:02:51 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:51.713481 | orchestrator | 2026-02-05 01:02:51 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:51.714325 | orchestrator | 2026-02-05 01:02:51 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:51.715036 | orchestrator | 2026-02-05 01:02:51 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:51.715064 | orchestrator | 2026-02-05 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:54.748403 | orchestrator | 2026-02-05 01:02:54 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:54.750607 | orchestrator | 2026-02-05 01:02:54 | INFO  | Task 
6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:54.755179 | orchestrator | 2026-02-05 01:02:54 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:54.757073 | orchestrator | 2026-02-05 01:02:54 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:54.757621 | orchestrator | 2026-02-05 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:57.779836 | orchestrator | 2026-02-05 01:02:57 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:02:57.781082 | orchestrator | 2026-02-05 01:02:57 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:02:57.781666 | orchestrator | 2026-02-05 01:02:57 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:02:57.782528 | orchestrator | 2026-02-05 01:02:57 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:02:57.783193 | orchestrator | 2026-02-05 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:00.817606 | orchestrator | 2026-02-05 01:03:00 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:00.818097 | orchestrator | 2026-02-05 01:03:00 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:00.818768 | orchestrator | 2026-02-05 01:03:00 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state STARTED 2026-02-05 01:03:00.819293 | orchestrator | 2026-02-05 01:03:00 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:00.819317 | orchestrator | 2026-02-05 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:03.844291 | orchestrator | 2026-02-05 01:03:03 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:03.844392 | orchestrator | 2026-02-05 01:03:03 | INFO  | Task 
6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:03.845658 | orchestrator | 2026-02-05 01:03:03 | INFO  | Task 41853bbf-1370-49df-b391-e7cf2b80eb66 is in state SUCCESS 2026-02-05 01:03:03.845694 | orchestrator | 2026-02-05 01:03:03.845700 | orchestrator | 2026-02-05 01:03:03.845704 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:03:03.845709 | orchestrator | 2026-02-05 01:03:03.845721 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:03:03.845726 | orchestrator | Thursday 05 February 2026 01:02:27 +0000 (0:00:00.132) 0:00:00.132 ***** 2026-02-05 01:03:03.845730 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:03.845734 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:03.845738 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:03.845742 | orchestrator | 2026-02-05 01:03:03.845745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:03:03.845749 | orchestrator | Thursday 05 February 2026 01:02:28 +0000 (0:00:00.253) 0:00:00.386 ***** 2026-02-05 01:03:03.845753 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-05 01:03:03.845757 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-05 01:03:03.845761 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-05 01:03:03.845765 | orchestrator | 2026-02-05 01:03:03.845769 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-05 01:03:03.845773 | orchestrator | 2026-02-05 01:03:03.845776 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-05 01:03:03.845780 | orchestrator | Thursday 05 February 2026 01:02:28 +0000 (0:00:00.557) 0:00:00.943 ***** 2026-02-05 01:03:03.845784 | orchestrator | ok: [testbed-node-0] 2026-02-05 
01:03:03.845788 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:03.845792 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:03.845796 | orchestrator | 2026-02-05 01:03:03.845799 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:03:03.845804 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:03:03.845808 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:03:03.845813 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:03:03.845817 | orchestrator | 2026-02-05 01:03:03.845820 | orchestrator | 2026-02-05 01:03:03.845824 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:03:03.845828 | orchestrator | Thursday 05 February 2026 01:02:29 +0000 (0:00:00.622) 0:00:01.565 ***** 2026-02-05 01:03:03.845832 | orchestrator | =============================================================================== 2026-02-05 01:03:03.845836 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.62s 2026-02-05 01:03:03.845840 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-02-05 01:03:03.845843 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2026-02-05 01:03:03.845847 | orchestrator | 2026-02-05 01:03:03.846882 | orchestrator | 2026-02-05 01:03:03.846905 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:03:03.846923 | orchestrator | 2026-02-05 01:03:03.846930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:03:03.846937 | orchestrator | Thursday 05 February 2026 00:58:57 +0000 (0:00:00.224) 0:00:00.224 
***** 2026-02-05 01:03:03.846944 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:03.846951 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:03.846958 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:03.846964 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:03:03.846971 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:03:03.847001 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:03:03.847007 | orchestrator | 2026-02-05 01:03:03.847012 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:03:03.847016 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.629) 0:00:00.854 ***** 2026-02-05 01:03:03.847021 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-05 01:03:03.847027 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-05 01:03:03.847034 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-05 01:03:03.847040 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-05 01:03:03.847046 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-05 01:03:03.847069 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-05 01:03:03.847076 | orchestrator | 2026-02-05 01:03:03.847083 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-05 01:03:03.847090 | orchestrator | 2026-02-05 01:03:03.847097 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:03:03.847103 | orchestrator | Thursday 05 February 2026 00:58:59 +0000 (0:00:00.556) 0:00:01.411 ***** 2026-02-05 01:03:03.847108 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:03:03.847113 | orchestrator | 2026-02-05 01:03:03.847118 | orchestrator | TASK 
[neutron : Get container facts] ******************************************* 2026-02-05 01:03:03.847125 | orchestrator | Thursday 05 February 2026 00:59:00 +0000 (0:00:01.005) 0:00:02.417 ***** 2026-02-05 01:03:03.847131 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:03.847137 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:03.847158 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:03.847164 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:03:03.847170 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:03:03.847176 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:03:03.847182 | orchestrator | 2026-02-05 01:03:03.847188 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-05 01:03:03.847227 | orchestrator | Thursday 05 February 2026 00:59:01 +0000 (0:00:01.195) 0:00:03.612 ***** 2026-02-05 01:03:03.847234 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:03.847240 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:03.847262 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:03.847268 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:03:03.847275 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:03:03.847280 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:03:03.847286 | orchestrator | 2026-02-05 01:03:03.847292 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-05 01:03:03.847326 | orchestrator | Thursday 05 February 2026 00:59:02 +0000 (0:00:00.994) 0:00:04.607 ***** 2026-02-05 01:03:03.847335 | orchestrator | ok: [testbed-node-0] => { 2026-02-05 01:03:03.847342 | orchestrator |  "changed": false, 2026-02-05 01:03:03.847349 | orchestrator |  "msg": "All assertions passed" 2026-02-05 01:03:03.847356 | orchestrator | } 2026-02-05 01:03:03.847363 | orchestrator | ok: [testbed-node-1] => { 2026-02-05 01:03:03.847393 | orchestrator |  "changed": false, 2026-02-05 01:03:03.847400 | orchestrator |  "msg": "All 
assertions passed" 2026-02-05 01:03:03.847407 | orchestrator | } 2026-02-05 01:03:03.847413 | orchestrator | ok: [testbed-node-2] => { 2026-02-05 01:03:03.847419 | orchestrator |  "changed": false, 2026-02-05 01:03:03.847433 | orchestrator |  "msg": "All assertions passed" 2026-02-05 01:03:03.847440 | orchestrator | } 2026-02-05 01:03:03.847446 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 01:03:03.847452 | orchestrator |  "changed": false, 2026-02-05 01:03:03.847459 | orchestrator |  "msg": "All assertions passed" 2026-02-05 01:03:03.847465 | orchestrator | } 2026-02-05 01:03:03.847471 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 01:03:03.847505 | orchestrator |  "changed": false, 2026-02-05 01:03:03.847511 | orchestrator |  "msg": "All assertions passed" 2026-02-05 01:03:03.847514 | orchestrator | } 2026-02-05 01:03:03.847518 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 01:03:03.847522 | orchestrator |  "changed": false, 2026-02-05 01:03:03.847526 | orchestrator |  "msg": "All assertions passed" 2026-02-05 01:03:03.847572 | orchestrator | } 2026-02-05 01:03:03.847577 | orchestrator | 2026-02-05 01:03:03.847581 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-05 01:03:03.847585 | orchestrator | Thursday 05 February 2026 00:59:02 +0000 (0:00:00.719) 0:00:05.327 ***** 2026-02-05 01:03:03.847589 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.847592 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.847596 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.847600 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.847604 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.847609 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.847615 | orchestrator | 2026-02-05 01:03:03.847637 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-05 01:03:03.847644 | 
orchestrator | Thursday 05 February 2026 00:59:03 +0000 (0:00:00.577) 0:00:05.904 ***** 2026-02-05 01:03:03.847650 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-05 01:03:03.847656 | orchestrator | 2026-02-05 01:03:03.847663 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-05 01:03:03.847669 | orchestrator | Thursday 05 February 2026 00:59:07 +0000 (0:00:03.781) 0:00:09.686 ***** 2026-02-05 01:03:03.847676 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-05 01:03:03.847683 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-05 01:03:03.847690 | orchestrator | 2026-02-05 01:03:03.847705 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-05 01:03:03.847711 | orchestrator | Thursday 05 February 2026 00:59:14 +0000 (0:00:06.969) 0:00:16.656 ***** 2026-02-05 01:03:03.847717 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:03:03.847723 | orchestrator | 2026-02-05 01:03:03.847729 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-05 01:03:03.847736 | orchestrator | Thursday 05 February 2026 00:59:17 +0000 (0:00:03.554) 0:00:20.210 ***** 2026-02-05 01:03:03.847742 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:03:03.847748 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-05 01:03:03.847754 | orchestrator | 2026-02-05 01:03:03.847768 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-05 01:03:03.847773 | orchestrator | Thursday 05 February 2026 00:59:22 +0000 (0:00:04.298) 0:00:24.508 ***** 2026-02-05 01:03:03.847784 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:03:03.847791 
| orchestrator | 2026-02-05 01:03:03.847797 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-05 01:03:03.847804 | orchestrator | Thursday 05 February 2026 00:59:26 +0000 (0:00:04.006) 0:00:28.515 ***** 2026-02-05 01:03:03.847810 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-05 01:03:03.847817 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-05 01:03:03.847823 | orchestrator | 2026-02-05 01:03:03.847829 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:03:03.847835 | orchestrator | Thursday 05 February 2026 00:59:34 +0000 (0:00:08.441) 0:00:36.957 ***** 2026-02-05 01:03:03.847847 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.847854 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.847860 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.847866 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.847873 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.847879 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.847886 | orchestrator | 2026-02-05 01:03:03.847892 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-05 01:03:03.847898 | orchestrator | Thursday 05 February 2026 00:59:35 +0000 (0:00:00.776) 0:00:37.733 ***** 2026-02-05 01:03:03.847904 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.847910 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.847917 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.847923 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.847929 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.847935 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.847942 | orchestrator | 2026-02-05 01:03:03.847948 | orchestrator | TASK 
[neutron : Check IPv6 support] ******************************************** 2026-02-05 01:03:03.847954 | orchestrator | Thursday 05 February 2026 00:59:37 +0000 (0:00:02.190) 0:00:39.924 ***** 2026-02-05 01:03:03.847961 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:03.847967 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:03.847973 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:03.847985 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:03:03.847990 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:03:03.847994 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:03:03.848002 | orchestrator | 2026-02-05 01:03:03.848006 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-05 01:03:03.848013 | orchestrator | Thursday 05 February 2026 00:59:39 +0000 (0:00:01.918) 0:00:41.842 ***** 2026-02-05 01:03:03.848017 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848020 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.848024 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.848028 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.848032 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.848035 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.848039 | orchestrator | 2026-02-05 01:03:03.848043 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-05 01:03:03.848047 | orchestrator | Thursday 05 February 2026 00:59:41 +0000 (0:00:02.245) 0:00:44.088 ***** 2026-02-05 01:03:03.848052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-02-05 01:03:03.848087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848091 | orchestrator | 2026-02-05 01:03:03.848095 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-05 01:03:03.848099 | orchestrator | Thursday 05 February 2026 00:59:44 +0000 (0:00:02.963) 0:00:47.051 ***** 2026-02-05 01:03:03.848102 | orchestrator | [WARNING]: Skipped 2026-02-05 01:03:03.848107 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-05 01:03:03.848113 | orchestrator | due to this access issue: 2026-02-05 01:03:03.848117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-05 01:03:03.848121 | orchestrator | a directory 2026-02-05 01:03:03.848125 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:03:03.848129 | orchestrator | 2026-02-05 01:03:03.848135 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:03:03.848138 | orchestrator | Thursday 05 February 2026 00:59:45 +0000 (0:00:00.774) 0:00:47.826 ***** 2026-02-05 01:03:03.848143 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:03:03.848147 | orchestrator | 2026-02-05 01:03:03.848151 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-05 01:03:03.848155 | orchestrator | Thursday 05 February 2026 00:59:46 +0000 (0:00:01.059) 0:00:48.885 ***** 2026-02-05 01:03:03.848159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-02-05 01:03:03.848183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848191 | orchestrator | 2026-02-05 01:03:03.848195 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-05 01:03:03.848198 | orchestrator | Thursday 05 February 2026 00:59:50 +0000 (0:00:03.583) 0:00:52.468 ***** 2026-02-05 01:03:03.848204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848208 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848219 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.848226 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848230 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.848234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848238 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.848242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848246 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.848252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848256 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.848260 | orchestrator | 2026-02-05 01:03:03.848263 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-05 01:03:03.848267 | orchestrator | Thursday 05 February 2026 00:59:52 +0000 (0:00:02.863) 0:00:55.331 ***** 2026-02-05 01:03:03.848276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848280 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.848287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848292 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848299 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.848305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848309 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.848313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848319 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.848323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848327 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.848331 | orchestrator | 2026-02-05 01:03:03.848335 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-05 01:03:03.848341 | orchestrator | Thursday 05 February 2026 00:59:55 +0000 (0:00:02.849) 0:00:58.181 ***** 2026-02-05 01:03:03.848345 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848352 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
01:03:03.848358 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.848365 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.848372 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.848378 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.848384 | orchestrator | 2026-02-05 01:03:03.848391 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-05 01:03:03.848398 | orchestrator | Thursday 05 February 2026 00:59:58 +0000 (0:00:02.632) 0:01:00.813 ***** 2026-02-05 01:03:03.848405 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848412 | orchestrator | 2026-02-05 01:03:03.848419 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-05 01:03:03.848425 | orchestrator | Thursday 05 February 2026 00:59:58 +0000 (0:00:00.132) 0:01:00.946 ***** 2026-02-05 01:03:03.848432 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848439 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.848445 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.848451 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.848458 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.848464 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.848471 | orchestrator | 2026-02-05 01:03:03.848475 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-05 01:03:03.848479 | orchestrator | Thursday 05 February 2026 00:59:59 +0000 (0:00:00.654) 0:01:01.601 ***** 2026-02-05 01:03:03.848483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848490 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.848497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848501 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.848505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848796 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.848802 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.848806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.848813 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.848818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.848835 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.848843 | orchestrator | 2026-02-05 01:03:03.848853 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-05 01:03:03.848860 | orchestrator | Thursday 05 February 2026 01:00:01 +0000 (0:00:02.407) 0:01:04.008 ***** 2026-02-05 01:03:03.848866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-02-05 01:03:03.848912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848918 | orchestrator | 2026-02-05 01:03:03.848925 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-05 01:03:03.848931 | orchestrator | Thursday 05 February 2026 01:00:05 +0000 (0:00:03.835) 0:01:07.843 ***** 2026-02-05 01:03:03.848942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.848987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.848994 | orchestrator | 2026-02-05 01:03:03.849000 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-05 01:03:03.849005 | orchestrator | Thursday 05 February 2026 01:00:10 +0000 (0:00:05.457) 0:01:13.300 ***** 2026-02-05 01:03:03.849009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849016 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849027 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849035 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849043 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849056 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849064 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849068 | orchestrator | 2026-02-05 01:03:03.849071 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-05 01:03:03.849075 | orchestrator | Thursday 05 February 2026 01:00:13 +0000 (0:00:02.469) 0:01:15.769 ***** 2026-02-05 01:03:03.849079 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:03.849083 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849087 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849091 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849094 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:03.849098 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:03.849102 | orchestrator | 2026-02-05 01:03:03.849106 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-05 01:03:03.849109 | orchestrator | Thursday 05 February 2026 01:00:17 +0000 (0:00:03.655) 0:01:19.425 ***** 2026-02-05 01:03:03.849115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849119 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849141 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849156 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849161 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.849165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.849171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.849175 | orchestrator | 2026-02-05 01:03:03.849179 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-05 01:03:03.849183 | orchestrator | Thursday 05 February 2026 01:00:22 +0000 (0:00:05.023) 0:01:24.448 ***** 2026-02-05 01:03:03.849187 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849191 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849194 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849198 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849202 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849206 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849209 | orchestrator | 2026-02-05 01:03:03.849213 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-05 01:03:03.849220 | orchestrator | Thursday 05 February 2026 01:00:24 +0000 (0:00:02.173) 0:01:26.621 ***** 2026-02-05 01:03:03.849223 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849227 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849231 | orchestrator | skipping: [testbed-node-1] 
2026-02-05 01:03:03.849235 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849239 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849242 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849246 | orchestrator | 2026-02-05 01:03:03.849250 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-05 01:03:03.849254 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:02.205) 0:01:28.827 ***** 2026-02-05 01:03:03.849260 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849263 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849267 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849271 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849275 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849278 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849282 | orchestrator | 2026-02-05 01:03:03.849286 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-05 01:03:03.849290 | orchestrator | Thursday 05 February 2026 01:00:28 +0000 (0:00:01.785) 0:01:30.612 ***** 2026-02-05 01:03:03.849294 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849298 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849301 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849305 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849309 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849313 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849317 | orchestrator | 2026-02-05 01:03:03.849320 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-05 01:03:03.849324 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:01.766) 0:01:32.378 ***** 2026-02-05 01:03:03.849328 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 01:03:03.849332 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849336 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849339 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849343 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849347 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849351 | orchestrator | 2026-02-05 01:03:03.849354 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-05 01:03:03.849358 | orchestrator | Thursday 05 February 2026 01:00:32 +0000 (0:00:02.092) 0:01:34.470 ***** 2026-02-05 01:03:03.849362 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849366 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849370 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849373 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849377 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849381 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849385 | orchestrator | 2026-02-05 01:03:03.849388 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-05 01:03:03.849392 | orchestrator | Thursday 05 February 2026 01:00:33 +0000 (0:00:01.801) 0:01:36.272 ***** 2026-02-05 01:03:03.849397 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:03:03.849402 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:03:03.849407 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849411 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849416 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:03:03.849420 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849425 | 
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:03:03.849432 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849437 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:03:03.849442 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849448 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:03:03.849453 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849458 | orchestrator | 2026-02-05 01:03:03.849462 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-05 01:03:03.849467 | orchestrator | Thursday 05 February 2026 01:00:35 +0000 (0:00:01.789) 0:01:38.061 ***** 2026-02-05 01:03:03.849472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849476 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849489 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849499 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849503 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849510 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849522 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849563 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849568 | orchestrator | 2026-02-05 01:03:03.849573 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-05 01:03:03.849578 | orchestrator | Thursday 05 February 2026 01:00:37 +0000 (0:00:01.541) 0:01:39.603 ***** 2026-02-05 01:03:03.849586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849591 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849604 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.849616 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849626 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849637 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.849647 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849652 | orchestrator | 2026-02-05 01:03:03.849656 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-05 01:03:03.849665 | orchestrator | Thursday 05 February 2026 01:00:39 +0000 (0:00:01.812) 0:01:41.415 ***** 2026-02-05 01:03:03.849670 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849674 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849679 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.849683 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849688 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.849692 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.849697 | orchestrator | 2026-02-05 01:03:03.849702 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-05 01:03:03.849707 | orchestrator | Thursday 05 February 2026 01:00:40 +0000 (0:00:01.685) 0:01:43.101 ***** 2026-02-05 01:03:03.849711 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.849716 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.849721 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.849725 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:03:03.849730 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:03:03.849735 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:03:03.849740 | orchestrator | 
2026-02-05 01:03:03.849745 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-05 01:03:03.849749 | orchestrator | Thursday 05 February 2026 01:00:43 +0000 (0:00:02.974) 0:01:46.076 *****
2026-02-05 01:03:03.849754 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:03.849759 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:03.849764 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:03.849770 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:03:03.849776 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:03:03.849781 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:03:03.849786 | orchestrator |
2026-02-05 01:03:03.849792 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-05 01:03:03.849799 | orchestrator | Thursday 05 February 2026 01:00:45 +0000 (0:00:01.708) 0:01:47.784 *****
2026-02-05 01:03:03.849811 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:03.849817 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:03.849823 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:03:03.849829 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:03:03.849835 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:03.849840 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:03:03.849847 | orchestrator |
2026-02-05 01:03:03.849853 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-05 01:03:03.849858 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:02.332) 0:01:50.117 *****
2026-02-05 01:03:03.849863 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:03.849869 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:03.849874 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:03.849880 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:03:03.849885 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:03:03.849890 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:03:03.849896 | orchestrator |
2026-02-05 01:03:03.849901 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-05 01:03:03.849906 | orchestrator | Thursday 05 February 2026 01:00:50 +0000 (0:00:02.722) 0:01:52.839 *****
2026-02-05 01:03:03.849912 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:03.849917 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:03.849922 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:03:03.849927 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:03.849932 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:03:03.849938 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:03:03.849944 | orchestrator |
2026-02-05 01:03:03.849949 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-05 01:03:03.849954 | orchestrator | Thursday 05 February 2026 01:00:52 +0000 (0:00:02.287) 0:01:55.127 *****
2026-02-05 01:03:03.849960 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:03.849972 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:03.849978 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:03:03.849984 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:03.849989 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:03:03.849995 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:03:03.850000 | orchestrator |
2026-02-05 01:03:03.850006 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-05 01:03:03.850060 | orchestrator | Thursday 05 February 2026 01:00:54 +0000 (0:00:01.613) 0:01:56.740 *****
2026-02-05 01:03:03.850071 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:03.850077 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:03.850083 | orchestrator |
skipping: [testbed-node-2] 2026-02-05 01:03:03.850088 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.850093 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.850099 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.850104 | orchestrator | 2026-02-05 01:03:03.850109 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-05 01:03:03.850122 | orchestrator | Thursday 05 February 2026 01:00:56 +0000 (0:00:01.918) 0:01:58.659 ***** 2026-02-05 01:03:03.850128 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.850133 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.850139 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.850144 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.850149 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.850154 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.850160 | orchestrator | 2026-02-05 01:03:03.850166 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-05 01:03:03.850172 | orchestrator | Thursday 05 February 2026 01:00:57 +0000 (0:00:01.460) 0:02:00.119 ***** 2026-02-05 01:03:03.850178 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:03:03.850184 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.850190 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:03:03.850195 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.850204 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:03:03.850210 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.850216 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:03:03.850222 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.850227 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:03:03.850232 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.850238 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:03:03.850243 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.850248 | orchestrator | 2026-02-05 01:03:03.850255 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-05 01:03:03.850260 | orchestrator | Thursday 05 February 2026 01:00:59 +0000 (0:00:01.855) 0:02:01.975 ***** 2026-02-05 01:03:03.850270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.850283 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.850288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.850294 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.850304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:03:03.850311 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.850319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.850326 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.850333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.850339 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.850349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:03:03.850364 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.850371 | orchestrator | 2026-02-05 01:03:03.850378 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-05 01:03:03.850384 | orchestrator | Thursday 05 February 2026 01:01:01 +0000 (0:00:01.604) 0:02:03.579 ***** 2026-02-05 01:03:03.850391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.850403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.850410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:03:03.850416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.850427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.850434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:03:03.850440 | orchestrator | 2026-02-05 01:03:03.850445 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:03:03.850451 | orchestrator | Thursday 05 February 2026 01:01:05 +0000 (0:00:04.217) 0:02:07.797 ***** 2026-02-05 01:03:03.850456 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:03.850463 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:03.850469 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:03.850475 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:03:03.850480 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:03:03.850489 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:03:03.850497 | orchestrator | 2026-02-05 01:03:03.850503 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-05 01:03:03.850510 | orchestrator | Thursday 05 February 2026 01:01:05 +0000 (0:00:00.503) 0:02:08.300 ***** 2026-02-05 01:03:03.850516 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:03.850522 | orchestrator | 2026-02-05 01:03:03.850541 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-05 01:03:03.850568 | orchestrator | Thursday 05 February 2026 01:01:08 +0000 (0:00:02.287) 0:02:10.587 ***** 2026-02-05 01:03:03.850575 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:03.850581 | orchestrator | 2026-02-05 01:03:03.850587 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-05 01:03:03.850593 | orchestrator | Thursday 05 February 2026 01:01:10 +0000 (0:00:02.495) 0:02:13.082 ***** 2026-02-05 01:03:03.850600 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:03.850606 | orchestrator | 2026-02-05 01:03:03.850612 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************
2026-02-05 01:03:03.850618 | orchestrator | Thursday 05 February 2026 01:01:48 +0000 (0:00:38.061) 0:02:51.144 *****
2026-02-05 01:03:03.850624 | orchestrator |
2026-02-05 01:03:03.850631 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-05 01:03:03.850642 | orchestrator | Thursday 05 February 2026 01:01:48 +0000 (0:00:00.055) 0:02:51.200 *****
2026-02-05 01:03:03.850647 | orchestrator |
2026-02-05 01:03:03.850653 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-05 01:03:03.850659 | orchestrator | Thursday 05 February 2026 01:01:48 +0000 (0:00:00.055) 0:02:51.255 *****
2026-02-05 01:03:03.850665 | orchestrator |
2026-02-05 01:03:03.850671 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-05 01:03:03.850676 | orchestrator | Thursday 05 February 2026 01:01:49 +0000 (0:00:00.158) 0:02:51.413 *****
2026-02-05 01:03:03.850682 | orchestrator |
2026-02-05 01:03:03.850687 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-05 01:03:03.850692 | orchestrator | Thursday 05 February 2026 01:01:49 +0000 (0:00:00.049) 0:02:51.462 *****
2026-02-05 01:03:03.850698 | orchestrator |
2026-02-05 01:03:03.850703 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-05 01:03:03.850709 | orchestrator | Thursday 05 February 2026 01:01:49 +0000 (0:00:00.096) 0:02:51.558 *****
2026-02-05 01:03:03.850715 | orchestrator |
2026-02-05 01:03:03.850720 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-05 01:03:03.850726 | orchestrator | Thursday 05 February 2026 01:01:49 +0000 (0:00:00.061) 0:02:51.620 *****
2026-02-05 01:03:03.850732 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:03.850737 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:03:03.850742 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:03:03.850748 | orchestrator |
2026-02-05 01:03:03.850754 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-05 01:03:03.850759 | orchestrator | Thursday 05 February 2026 01:02:18 +0000 (0:00:29.660) 0:03:21.281 *****
2026-02-05 01:03:03.850764 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:03:03.850771 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:03:03.850779 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:03:03.850786 | orchestrator |
2026-02-05 01:03:03.850795 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:03:03.850802 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 01:03:03.850808 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-05 01:03:03.850814 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-05 01:03:03.850819 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 01:03:03.850826 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 01:03:03.850833 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 01:03:03.850839 | orchestrator |
2026-02-05 01:03:03.850845 | orchestrator |
2026-02-05 01:03:03.850851 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:03:03.850857 | orchestrator | Thursday 05 February 2026 01:03:02 +0000 (0:00:43.297) 0:04:04.578 *****
2026-02-05 01:03:03.850863 | orchestrator | ===============================================================================
2026-02-05 01:03:03.850871 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 43.30s
2026-02-05 01:03:03.850878 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.06s
2026-02-05 01:03:03.850884 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.66s
2026-02-05 01:03:03.850889 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.44s
2026-02-05 01:03:03.850900 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.97s
2026-02-05 01:03:03.850905 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.46s
2026-02-05 01:03:03.850911 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.02s
2026-02-05 01:03:03.850917 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.30s
2026-02-05 01:03:03.850927 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.22s
2026-02-05 01:03:03.850933 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.01s
2026-02-05 01:03:03.850940 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.84s
2026-02-05 01:03:03.850945 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.78s
2026-02-05 01:03:03.850951 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.66s
2026-02-05 01:03:03.850956 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.58s
2026-02-05 01:03:03.850962 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.55s
2026-02-05 01:03:03.850968 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 2.97s
2026-02-05 01:03:03.850974 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.96s
2026-02-05 01:03:03.850980 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 2.86s
2026-02-05 01:03:03.850986 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.85s
2026-02-05 01:03:03.850992 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 2.72s
2026-02-05 01:03:03.850998 | orchestrator | 2026-02-05 01:03:03 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED
2026-02-05 01:03:03.851005 | orchestrator | 2026-02-05 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:06.869923 | orchestrator | 2026-02-05 01:03:06 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED
2026-02-05 01:03:06.872052 | orchestrator | 2026-02-05 01:03:06 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:03:06.872093 | orchestrator | 2026-02-05 01:03:06 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:03:06.872098 | orchestrator | 2026-02-05 01:03:06 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED
2026-02-05 01:03:06.872113 | orchestrator | 2026-02-05 01:03:06 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:09.898587 | orchestrator | 2026-02-05 01:03:09 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED
2026-02-05 01:03:09.900116 | orchestrator | 2026-02-05 01:03:09 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:03:09.901794 | orchestrator | 2026-02-05 01:03:09 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:03:09.902492 | orchestrator | 2026-02-05 01:03:09 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED
2026-02-05 01:03:09.902919 | orchestrator | 2026-02-05 01:03:09 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:12.941315 | orchestrator | 2026-02-05 01:03:12 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED
2026-02-05 01:03:12.945043 | orchestrator | 2026-02-05 01:03:12 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:03:12.946833 | orchestrator | 2026-02-05 01:03:12 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:03:12.948443 | orchestrator | 2026-02-05 01:03:12 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED
2026-02-05 01:03:12.948491 | orchestrator | 2026-02-05 01:03:12 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:15.991308 | orchestrator | 2026-02-05 01:03:15 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED
2026-02-05 01:03:15.995311 | orchestrator | 2026-02-05 01:03:15 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:03:15.997895 | orchestrator | 2026-02-05 01:03:15 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:03:15.999294 | orchestrator | 2026-02-05 01:03:15 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED
2026-02-05 01:03:15.999337 | orchestrator | 2026-02-05 01:03:15 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:19.055631 | orchestrator | 2026-02-05 01:03:19 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED
2026-02-05 01:03:19.056511 | orchestrator | 2026-02-05 01:03:19 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED
2026-02-05 01:03:19.057806 | orchestrator | 2026-02-05 01:03:19 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED
2026-02-05 01:03:19.059163 | orchestrator | 2026-02-05 01:03:19 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED
2026-02-05
01:03:19.059226 | orchestrator | 2026-02-05 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:22.111687 | orchestrator | 2026-02-05 01:03:22 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED 2026-02-05 01:03:22.113094 | orchestrator | 2026-02-05 01:03:22 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:22.114057 | orchestrator | 2026-02-05 01:03:22 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:22.115089 | orchestrator | 2026-02-05 01:03:22 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:22.115143 | orchestrator | 2026-02-05 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:25.144386 | orchestrator | 2026-02-05 01:03:25 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED 2026-02-05 01:03:25.144639 | orchestrator | 2026-02-05 01:03:25 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:25.145333 | orchestrator | 2026-02-05 01:03:25 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:25.146294 | orchestrator | 2026-02-05 01:03:25 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:25.146332 | orchestrator | 2026-02-05 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:28.191924 | orchestrator | 2026-02-05 01:03:28 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED 2026-02-05 01:03:28.193621 | orchestrator | 2026-02-05 01:03:28 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:28.195257 | orchestrator | 2026-02-05 01:03:28 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:28.196913 | orchestrator | 2026-02-05 01:03:28 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:28.197214 | orchestrator 
| 2026-02-05 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:31.232782 | orchestrator | 2026-02-05 01:03:31 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED 2026-02-05 01:03:31.234066 | orchestrator | 2026-02-05 01:03:31 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:31.236641 | orchestrator | 2026-02-05 01:03:31 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:31.238958 | orchestrator | 2026-02-05 01:03:31 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:31.239571 | orchestrator | 2026-02-05 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:34.287503 | orchestrator | 2026-02-05 01:03:34 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state STARTED 2026-02-05 01:03:34.290678 | orchestrator | 2026-02-05 01:03:34 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:34.291631 | orchestrator | 2026-02-05 01:03:34 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:34.292088 | orchestrator | 2026-02-05 01:03:34 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:34.292316 | orchestrator | 2026-02-05 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:37.321734 | orchestrator | 2026-02-05 01:03:37 | INFO  | Task f0578eb4-fc24-4682-a1f3-87b8a688ec2e is in state SUCCESS 2026-02-05 01:03:37.323146 | orchestrator | 2026-02-05 01:03:37 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:37.324195 | orchestrator | 2026-02-05 01:03:37 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:37.325583 | orchestrator | 2026-02-05 01:03:37 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:37.325607 | orchestrator | 2026-02-05 01:03:37 | INFO  | 
Wait 1 second(s) until the next check 2026-02-05 01:03:40.366290 | orchestrator | 2026-02-05 01:03:40 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:03:40.366738 | orchestrator | 2026-02-05 01:03:40 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:40.368201 | orchestrator | 2026-02-05 01:03:40 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:40.370309 | orchestrator | 2026-02-05 01:03:40 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:40.370808 | orchestrator | 2026-02-05 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:43.397874 | orchestrator | 2026-02-05 01:03:43 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:03:43.398230 | orchestrator | 2026-02-05 01:03:43 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:43.398751 | orchestrator | 2026-02-05 01:03:43 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:43.400522 | orchestrator | 2026-02-05 01:03:43 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:43.400658 | orchestrator | 2026-02-05 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:46.436654 | orchestrator | 2026-02-05 01:03:46 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:03:46.439018 | orchestrator | 2026-02-05 01:03:46 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:46.441206 | orchestrator | 2026-02-05 01:03:46 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:46.445539 | orchestrator | 2026-02-05 01:03:46 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:46.446114 | orchestrator | 2026-02-05 01:03:46 | INFO  | Wait 1 second(s) until the next 
check 2026-02-05 01:03:49.505079 | orchestrator | 2026-02-05 01:03:49 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:03:49.505571 | orchestrator | 2026-02-05 01:03:49 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state STARTED 2026-02-05 01:03:49.507357 | orchestrator | 2026-02-05 01:03:49 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:49.508382 | orchestrator | 2026-02-05 01:03:49 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:49.508414 | orchestrator | 2026-02-05 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:52.534951 | orchestrator | 2026-02-05 01:03:52 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:03:52.538672 | orchestrator | 2026-02-05 01:03:52 | INFO  | Task dfaf3b52-e8ec-495d-9ac4-d8f12391d4db is in state SUCCESS 2026-02-05 01:03:52.538725 | orchestrator | 2026-02-05 01:03:52.538733 | orchestrator | 2026-02-05 01:03:52.538739 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:03:52.538745 | orchestrator | 2026-02-05 01:03:52.538751 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:03:52.538757 | orchestrator | Thursday 05 February 2026 01:03:08 +0000 (0:00:00.293) 0:00:00.293 ***** 2026-02-05 01:03:52.538763 | orchestrator | ok: [testbed-manager] 2026-02-05 01:03:52.538767 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:52.538770 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:52.538783 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:52.538786 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:03:52.538789 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:03:52.538795 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:03:52.538800 | orchestrator | 2026-02-05 01:03:52.538805 | orchestrator | TASK [Group hosts 
based on enabled services] ***********************************
2026-02-05 01:03:52.538810 | orchestrator | Thursday 05 February 2026 01:03:08 +0000 (0:00:00.651) 0:00:00.944 *****
2026-02-05 01:03:52.538816 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538821 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538827 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538832 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538837 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538842 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538848 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-05 01:03:52.538853 | orchestrator |
2026-02-05 01:03:52.538858 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-05 01:03:52.538863 | orchestrator |
2026-02-05 01:03:52.538867 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-05 01:03:52.538870 | orchestrator | Thursday 05 February 2026 01:03:09 +0000 (0:00:00.628) 0:00:01.573 *****
2026-02-05 01:03:52.538874 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:03:52.538878 | orchestrator |
2026-02-05 01:03:52.538881 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-05 01:03:52.538886 | orchestrator | Thursday 05 February 2026 01:03:10 +0000 (0:00:01.195) 0:00:02.768 *****
2026-02-05 01:03:52.538892 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-05 01:03:52.538897 | orchestrator |
2026-02-05 01:03:52.538902 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-05 01:03:52.538907 | orchestrator | Thursday 05 February 2026 01:03:13 +0000 (0:00:02.938) 0:00:05.707 *****
2026-02-05 01:03:52.538913 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-05 01:03:52.538920 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-05 01:03:52.538939 | orchestrator |
2026-02-05 01:03:52.538944 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-05 01:03:52.538949 | orchestrator | Thursday 05 February 2026 01:03:19 +0000 (0:00:06.194) 0:00:11.901 *****
2026-02-05 01:03:52.538954 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-05 01:03:52.538960 | orchestrator |
2026-02-05 01:03:52.538965 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-05 01:03:52.538970 | orchestrator | Thursday 05 February 2026 01:03:23 +0000 (0:00:03.435) 0:00:15.336 *****
2026-02-05 01:03:52.538975 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:03:52.538980 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-05 01:03:52.538986 | orchestrator |
2026-02-05 01:03:52.538990 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-05 01:03:52.538996 | orchestrator | Thursday 05 February 2026 01:03:26 +0000 (0:00:03.388) 0:00:18.725 *****
2026-02-05 01:03:52.539001 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-05 01:03:52.539007 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-05 01:03:52.539012 | orchestrator |
2026-02-05 01:03:52.539017 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-05 01:03:52.539022 | orchestrator | Thursday 05 February 2026 01:03:32 +0000 (0:00:05.526) 0:00:24.251 *****
2026-02-05 01:03:52.539027 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-05 01:03:52.539032 | orchestrator |
2026-02-05 01:03:52.539037 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:03:52.539043 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539048 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539054 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539059 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539132 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539140 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539146 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:52.539151 | orchestrator |
2026-02-05 01:03:52.539157 | orchestrator |
2026-02-05 01:03:52.539162 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:03:52.539167 | orchestrator | Thursday 05 February 2026 01:03:36 +0000 (0:00:04.380) 0:00:28.631 *****
2026-02-05 01:03:52.539206 | orchestrator | ===============================================================================
2026-02-05 01:03:52.539214 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.19s
2026-02-05 01:03:52.539219 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.53s
2026-02-05 01:03:52.539225 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.38s
2026-02-05 01:03:52.539230 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.44s
2026-02-05 01:03:52.539236 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.39s
2026-02-05 01:03:52.539241 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 2.94s
2026-02-05 01:03:52.539247 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.20s
2026-02-05 01:03:52.539252 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s
2026-02-05 01:03:52.539263 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-02-05 01:03:52.539269 | orchestrator |
2026-02-05 01:03:52.539416 | orchestrator |
2026-02-05 01:03:52.539425 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:03:52.539431 | orchestrator |
2026-02-05 01:03:52.539436 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:03:52.539441 | orchestrator | Thursday 05 February 2026 01:01:59 +0000 (0:00:00.229) 0:00:00.229 *****
2026-02-05 01:03:52.539446 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:03:52.539451 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:03:52.539456 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:03:52.539462 | orchestrator |
2026-02-05 01:03:52.539467 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:03:52.539472 | orchestrator | Thursday 05 February 2026 01:01:59 +0000 (0:00:00.265) 0:00:00.495 *****
2026-02-05 01:03:52.539477 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-05 01:03:52.539482 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-05 01:03:52.539487 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-05 01:03:52.539569 | orchestrator |
2026-02-05 01:03:52.539574 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-05 01:03:52.539596 | orchestrator |
2026-02-05 01:03:52.539602 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-05 01:03:52.539608 | orchestrator | Thursday 05 February 2026 01:02:00 +0000 (0:00:00.369) 0:00:00.865 *****
2026-02-05 01:03:52.539632 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:03:52.539638 | orchestrator |
2026-02-05 01:03:52.539643 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-02-05 01:03:52.539648 | orchestrator | Thursday 05 February 2026 01:02:00 +0000 (0:00:00.482) 0:00:01.347 *****
2026-02-05 01:03:52.539654 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-05 01:03:52.539659 | orchestrator |
2026-02-05 01:03:52.539665 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-02-05 01:03:52.539670 | orchestrator | Thursday 05 February 2026 01:02:04 +0000 (0:00:03.562) 0:00:04.910 *****
2026-02-05 01:03:52.539675 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-05 01:03:52.539682 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-05 01:03:52.539687 | orchestrator |
2026-02-05 01:03:52.539693 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-05 01:03:52.539698 | orchestrator | Thursday 05 February 2026 01:02:10 +0000 (0:00:06.297) 0:00:11.207 *****
2026-02-05 01:03:52.539703 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 01:03:52.539709 | orchestrator |
2026-02-05 01:03:52.539714 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-05 01:03:52.539730 | orchestrator | Thursday 05 February 2026 01:02:14 +0000 (0:00:03.433) 0:00:14.640 *****
2026-02-05 01:03:52.539749 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:03:52.539755 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-05 01:03:52.539760 | orchestrator |
2026-02-05 01:03:52.539766 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-05 01:03:52.539771 | orchestrator | Thursday 05 February 2026 01:02:18 +0000 (0:00:04.213) 0:00:18.854 *****
2026-02-05 01:03:52.539776 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 01:03:52.539781 | orchestrator |
2026-02-05 01:03:52.539784 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-05 01:03:52.539787 | orchestrator | Thursday 05 February 2026 01:02:21 +0000 (0:00:03.543) 0:00:22.398 *****
2026-02-05 01:03:52.539790 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-05 01:03:52.539798 | orchestrator |
2026-02-05 01:03:52.539801 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-05 01:03:52.539804 | orchestrator | Thursday 05 February 2026 01:02:25 +0000 (0:00:04.066) 0:00:26.465 *****
2026-02-05 01:03:52.539807 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:52.539811 | orchestrator |
2026-02-05 01:03:52.539814 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-05 01:03:52.539817 | orchestrator | Thursday 05 February 2026 01:02:29 +0000 (0:00:03.308) 0:00:29.774
*****
2026-02-05 01:03:52.539820 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:52.539823 | orchestrator |
2026-02-05 01:03:52.539826 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-05 01:03:52.539829 | orchestrator | Thursday 05 February 2026 01:02:33 +0000 (0:00:03.863) 0:00:33.638 *****
2026-02-05 01:03:52.539832 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:52.539835 | orchestrator |
2026-02-05 01:03:52.539838 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-05 01:03:52.539844 | orchestrator | Thursday 05 February 2026 01:02:36 +0000 (0:00:03.427) 0:00:37.065 *****
2026-02-05 01:03:52.539854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.539861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.539864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.539870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:03:52.539876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:03:52.539882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:03:52.539886 | orchestrator |
2026-02-05 01:03:52.539889 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-02-05 01:03:52.539892 | orchestrator | Thursday 05 February 2026 01:02:38 +0000 (0:00:01.651) 0:00:38.717 *****
2026-02-05 01:03:52.539895 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:52.539899 | orchestrator |
2026-02-05 01:03:52.539902 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-02-05 01:03:52.539905 | orchestrator | Thursday 05 February 2026 01:02:38 +0000 (0:00:00.117) 0:00:38.834 *****
2026-02-05 01:03:52.539908 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:52.539911 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:52.539914 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:52.539917 | orchestrator |
2026-02-05 01:03:52.539920 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-02-05 01:03:52.539924 | orchestrator | Thursday 05 February 2026 01:02:38 +0000 (0:00:00.371) 0:00:39.205 *****
2026-02-05 01:03:52.539927 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:03:52.539930 | orchestrator |
2026-02-05 01:03:52.539933 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-02-05 01:03:52.539936 | orchestrator | Thursday 05 February 2026 01:02:39 +0000 (0:00:00.761) 0:00:39.967 *****
2026-02-05 01:03:52.539939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.539945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.539950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.539958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:03:52.539962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:03:52.539965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:03:52.539971 | orchestrator |
2026-02-05 01:03:52.539974 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-02-05 01:03:52.539977 | orchestrator | Thursday 05 February 2026 01:02:41 +0000 (0:00:02.268) 0:00:42.235 *****
2026-02-05 01:03:52.539980 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:03:52.539983 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:03:52.539986 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:03:52.539989 | orchestrator |
2026-02-05 01:03:52.539993 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-05 01:03:52.539996 | orchestrator | Thursday 05 February 2026 01:02:41 +0000 (0:00:00.274) 0:00:42.510 *****
2026-02-05 01:03:52.539999 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:03:52.540002 | orchestrator |
2026-02-05 01:03:52.540005 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-02-05 01:03:52.540008 | orchestrator | Thursday 05 February 2026 01:02:42 +0000 (0:00:00.593) 0:00:43.103 *****
2026-02-05 01:03:52.540013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.540020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:03:52.540023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540040 | orchestrator | 2026-02-05 01:03:52.540043 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-05 01:03:52.540046 | orchestrator | Thursday 05 February 2026 01:02:44 +0000 (0:00:02.215) 0:00:45.318 ***** 2026-02-05 01:03:52.540052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540063 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:52.540067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540073 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:52.540078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540090 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:52.540093 | orchestrator | 2026-02-05 01:03:52.540106 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-05 01:03:52.540109 | orchestrator | Thursday 05 February 2026 01:02:45 +0000 (0:00:00.589) 0:00:45.908 ***** 2026-02-05 01:03:52.540113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540119 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:52.540124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540130 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540133 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:52.540139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540145 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:52.540148 | orchestrator | 2026-02-05 01:03:52.540152 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-05 01:03:52.540155 | orchestrator | Thursday 05 February 2026 01:02:46 +0000 (0:00:01.184) 0:00:47.092 ***** 2026-02-05 01:03:52.540158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540275 | orchestrator | 2026-02-05 01:03:52.540278 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-05 01:03:52.540282 | orchestrator | Thursday 05 February 2026 01:02:48 +0000 (0:00:02.293) 0:00:49.386 ***** 2026-02-05 01:03:52.540287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540315 | orchestrator | 2026-02-05 01:03:52.540318 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-05 01:03:52.540323 | orchestrator | Thursday 05 February 2026 01:02:54 +0000 (0:00:06.046) 0:00:55.432 ***** 2026-02-05 01:03:52.540326 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540333 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:52.540336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:03:52.540349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:03:52.540355 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:52.540359 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:52.540362 | orchestrator | 2026-02-05 01:03:52.540365 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-05 01:03:52.540368 | orchestrator | Thursday 05 February 2026 01:02:55 +0000 (0:00:00.443) 0:00:55.876 ***** 2026-02-05 01:03:52.540371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:03:52.540387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:52.540396 | orchestrator | 2026-02-05 01:03:52.540400 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-05 01:03:52.540403 | orchestrator | Thursday 05 February 2026 01:02:57 +0000 (0:00:02.169) 0:00:58.045 ***** 2026-02-05 01:03:52.540406 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:52.540409 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:52.540412 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:52.540415 | orchestrator | 2026-02-05 01:03:52.540418 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-05 01:03:52.540421 | orchestrator | Thursday 05 February 2026 01:02:58 +0000 (0:00:00.582) 0:00:58.627 ***** 2026-02-05 01:03:52.540425 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:52.540428 | orchestrator | 2026-02-05 01:03:52.540431 | orchestrator | TASK [magnum : Creating Magnum database user and setting 
permissions] ********** 2026-02-05 01:03:52.540437 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:02.247) 0:01:00.875 ***** 2026-02-05 01:03:52.540440 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:52.540443 | orchestrator | 2026-02-05 01:03:52.540447 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-05 01:03:52.540452 | orchestrator | Thursday 05 February 2026 01:03:02 +0000 (0:00:02.377) 0:01:03.252 ***** 2026-02-05 01:03:52.540457 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:52.540462 | orchestrator | 2026-02-05 01:03:52.540468 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 01:03:52.540473 | orchestrator | Thursday 05 February 2026 01:03:18 +0000 (0:00:15.276) 0:01:18.529 ***** 2026-02-05 01:03:52.540478 | orchestrator | 2026-02-05 01:03:52.540483 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 01:03:52.540491 | orchestrator | Thursday 05 February 2026 01:03:18 +0000 (0:00:00.093) 0:01:18.622 ***** 2026-02-05 01:03:52.540497 | orchestrator | 2026-02-05 01:03:52.540520 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 01:03:52.540525 | orchestrator | Thursday 05 February 2026 01:03:18 +0000 (0:00:00.066) 0:01:18.689 ***** 2026-02-05 01:03:52.540531 | orchestrator | 2026-02-05 01:03:52.540535 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-05 01:03:52.540540 | orchestrator | Thursday 05 February 2026 01:03:18 +0000 (0:00:00.068) 0:01:18.758 ***** 2026-02-05 01:03:52.540546 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:52.540551 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:52.540556 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:52.540561 | orchestrator | 2026-02-05 01:03:52.540566 | 
orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-05 01:03:52.540571 | orchestrator | Thursday 05 February 2026 01:03:34 +0000 (0:00:15.854) 0:01:34.613 ***** 2026-02-05 01:03:52.540576 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:52.540581 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:52.540586 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:52.540591 | orchestrator | 2026-02-05 01:03:52.540599 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:03:52.540605 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:03:52.540611 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:03:52.540616 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:03:52.540621 | orchestrator | 2026-02-05 01:03:52.540626 | orchestrator | 2026-02-05 01:03:52.540631 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:03:52.540636 | orchestrator | Thursday 05 February 2026 01:03:49 +0000 (0:00:15.495) 0:01:50.108 ***** 2026-02-05 01:03:52.540641 | orchestrator | =============================================================================== 2026-02-05 01:03:52.540647 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.85s 2026-02-05 01:03:52.540652 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.50s 2026-02-05 01:03:52.540658 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.28s 2026-02-05 01:03:52.540663 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.30s 2026-02-05 01:03:52.540668 | orchestrator | magnum : 
Copying over magnum.conf --------------------------------------- 6.05s 2026-02-05 01:03:52.540673 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.21s 2026-02-05 01:03:52.540679 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.07s 2026-02-05 01:03:52.540684 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.86s 2026-02-05 01:03:52.540692 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.56s 2026-02-05 01:03:52.540697 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.54s 2026-02-05 01:03:52.540702 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.43s 2026-02-05 01:03:52.540707 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.43s 2026-02-05 01:03:52.540713 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.31s 2026-02-05 01:03:52.540718 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.38s 2026-02-05 01:03:52.540723 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.29s 2026-02-05 01:03:52.540728 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.27s 2026-02-05 01:03:52.540734 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.25s 2026-02-05 01:03:52.540739 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.22s 2026-02-05 01:03:52.540744 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.17s 2026-02-05 01:03:52.540749 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.65s 2026-02-05 01:03:52.540755 | orchestrator | 2026-02-05 01:03:52 | 
INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:03:52.541855 | orchestrator | 2026-02-05 01:03:52 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state STARTED 2026-02-05 01:03:52.543743 | orchestrator | 2026-02-05 01:03:52 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:03:52.543779 | orchestrator | 2026-02-05 01:03:52 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated roughly every 3 seconds from 01:03:55 to 01:05:14; tasks e0b08963-1279-4f3b-9f24-5d92527a92d4, a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a, 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 and 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d remained in state STARTED ...]
2026-02-05 01:05:17.831364 | orchestrator | 2026-02-05 01:05:17 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:17.832367 | orchestrator | 2026-02-05 01:05:17 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:17.832975 | orchestrator | 2026-02-05 01:05:17 | INFO  | Task 6ae469bd-7d06-4d2c-91d2-ce08d94fa396 is in state SUCCESS 2026-02-05 01:05:17.833537 | orchestrator | 2026-02-05 01:05:17 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state STARTED 2026-02-05 01:05:17.834668 | orchestrator | 2026-02-05 01:05:17 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:17.834698 | orchestrator | 2026-02-05 01:05:17 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated from 01:05:20 to 01:05:30; the four remaining tasks stayed in state STARTED ...]
2026-02-05 01:05:33.066243 | orchestrator | 2026-02-05 01:05:33 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:33.067758 | orchestrator | 2026-02-05 01:05:33 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:33.069513 | orchestrator | 2026-02-05 01:05:33 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:33.073669 | orchestrator | 2026-02-05 01:05:33 | INFO  | Task 0bc41710-56ca-4cc5-9f6f-0d088a59bb5d is in state SUCCESS 2026-02-05 01:05:33.074323 | orchestrator | 2026-02-05 01:05:33.074379 | orchestrator | 2026-02-05 01:05:33.074388 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-02-05 01:05:33.074395 | orchestrator | 2026-02-05 01:05:33.074402 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-02-05 01:05:33.074426 | orchestrator | Thursday 05 February 2026 00:58:57 +0000 (0:00:00.079) 0:00:00.079 ***** 2026-02-05 01:05:33.074488 | orchestrator | changed: [localhost] 2026-02-05 01:05:33.074494 | orchestrator | 2026-02-05 01:05:33.074499 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-02-05 01:05:33.074510 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:00.775) 0:00:00.854 ***** 2026-02-05 01:05:33.074514 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-02-05 01:05:33.074527 | orchestrator |
2026-02-05 01:05:33.074531 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074535 | orchestrator |
2026-02-05 01:05:33.074539 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074548 | orchestrator |
2026-02-05 01:05:33.074552 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074556 | orchestrator |
2026-02-05 01:05:33.074563 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074569 | orchestrator |
2026-02-05 01:05:33.074575 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074581 | orchestrator |
2026-02-05 01:05:33.074587 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074592 | orchestrator |
2026-02-05 01:05:33.074598 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074605 | orchestrator |
2026-02-05 01:05:33.074610 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:05:33.074617 | orchestrator | changed: [localhost]
2026-02-05 01:05:33.074622 | orchestrator |
2026-02-05 01:05:33.074629 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-02-05 01:05:33.074635 | orchestrator | Thursday 05 February 2026 01:05:04 +0000 (0:06:06.198) 0:06:07.052 *****
2026-02-05 01:05:33.074705 | orchestrator | changed: [localhost]
2026-02-05 01:05:33.074713 | orchestrator |
2026-02-05 01:05:33.074717 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:05:33.074721 | orchestrator |
2026-02-05 01:05:33.074726 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:05:33.074732 | orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:10.664) 0:06:17.716 *****
2026-02-05 01:05:33.074738 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:05:33.074744 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:05:33.074754 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:05:33.074788 | orchestrator |
2026-02-05 01:05:33.074795 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:05:33.074802 | orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:00.272) 0:06:17.989 *****
2026-02-05 01:05:33.074823 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-02-05 01:05:33.074829 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-02-05 01:05:33.074836 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-02-05 01:05:33.074841 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-02-05 01:05:33.074847 | orchestrator |
2026-02-05 01:05:33.074854 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-02-05 01:05:33.074861 | orchestrator | skipping: no hosts matched
2026-02-05 01:05:33.074868 | orchestrator |
2026-02-05 01:05:33.074874 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:05:33.074881 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:05:33.074890 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:05:33.074899 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:05:33.074906 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:05:33.074913 | orchestrator |
2026-02-05 01:05:33.074919 | orchestrator |
2026-02-05 01:05:33.074925 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:05:33.075028 | orchestrator | Thursday 05 February 2026 01:05:16 +0000 (0:00:00.515) 0:06:18.504 *****
2026-02-05 01:05:33.075033 | orchestrator | ===============================================================================
2026-02-05 01:05:33.075037 | orchestrator | Download ironic-agent initramfs --------------------------------------- 366.20s
2026-02-05 01:05:33.075042 | orchestrator | Download ironic-agent kernel ------------------------------------------- 10.66s
2026-02-05 01:05:33.075047 | orchestrator | Ensure the destination directory exists --------------------------------- 0.78s
2026-02-05 01:05:33.075052 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s
2026-02-05 01:05:33.075056 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-02-05 01:05:33.075060 | orchestrator |
2026-02-05 01:05:33.075332 | orchestrator |
2026-02-05 01:05:33.075353 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:05:33.075360 | orchestrator |
2026-02-05 01:05:33.075365 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:05:33.075371 | orchestrator | Thursday 05 February 2026 01:02:33 +0000 (0:00:00.257) 0:00:00.257 *****
2026-02-05 01:05:33.075377 | orchestrator | ok: [testbed-manager]
2026-02-05 01:05:33.075384 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:05:33.075390 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:05:33.075396 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:05:33.075425 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:05:33.075516 | orchestrator | ok:
[testbed-node-4]
2026-02-05 01:05:33.075523 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:05:33.075527 | orchestrator |
2026-02-05 01:05:33.075531 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:05:33.075536 | orchestrator | Thursday 05 February 2026 01:02:34 +0000 (0:00:00.672) 0:00:00.930 *****
2026-02-05 01:05:33.075559 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075565 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075569 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075576 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075597 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075607 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075612 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-05 01:05:33.075618 | orchestrator |
2026-02-05 01:05:33.075625 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-05 01:05:33.075630 | orchestrator |
2026-02-05 01:05:33.075637 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-05 01:05:33.075643 | orchestrator | Thursday 05 February 2026 01:02:34 +0000 (0:00:00.600) 0:00:01.531 *****
2026-02-05 01:05:33.075649 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:05:33.075657 | orchestrator |
2026-02-05 01:05:33.075664 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-05 01:05:33.075684 | orchestrator | Thursday 05 February 2026 01:02:35 +0000 (0:00:01.253) 0:00:02.784 *****
2026-02-05 01:05:33.075703 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 01:05:33.075715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075726 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075763 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075784 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 01:05:33.075793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.075819 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.075852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.075990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.076049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.076058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.076089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.076097 | orchestrator |
2026-02-05 01:05:33.076105 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-05 01:05:33.076112 | orchestrator | Thursday 05 February 2026 01:02:38 +0000 (0:00:02.970) 0:00:05.755 *****
2026-02-05 01:05:33.076119 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:05:33.076127 | orchestrator |
2026-02-05 01:05:33.076134 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-02-05 01:05:33.076141 | orchestrator | Thursday 05 February 2026 01:02:40 +0000 (0:00:01.236) 0:00:06.991 *****
2026-02-05 01:05:33.076153 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 01:05:33.076161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076216 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:05:33.076223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.076228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.076237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.076247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:05:33.076252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:05:33.076314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076319 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076334 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076358 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 01:05:33.076367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076371 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.076767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.076788 | orchestrator | 2026-02-05 01:05:33.076794 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-05 01:05:33.076801 | orchestrator | Thursday 05 February 2026 01:02:45 +0000 (0:00:05.213) 0:00:12.205 ***** 2026-02-05 01:05:33.076814 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 01:05:33.076830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.076940 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.076979 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 01:05:33.076991 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.076998 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:05:33.077005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077082 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:05:33.077089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077118 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:05:33.077123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077144 | orchestrator | skipping: [testbed-node-2] 
2026-02-05 01:05:33.077150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077190 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:05:33.077196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077219 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:05:33.077224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077246 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:05:33.077252 | orchestrator | 2026-02-05 01:05:33.077259 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-05 01:05:33.077265 | orchestrator | Thursday 05 February 2026 01:02:46 +0000 (0:00:01.518) 0:00:13.724 ***** 2026-02-05 01:05:33.077274 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 01:05:33.077281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077287 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077297 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 01:05:33.077304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077395 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:05:33.077402 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:05:33.077408 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:05:33.077416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:05:33.077478 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:05:33.077483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077502 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:05:33.077506 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077524 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:05:33.077529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:05:33.077539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:05:33.077547 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:05:33.077551 | orchestrator | 2026-02-05 01:05:33.077555 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-05 01:05:33.077559 | orchestrator | Thursday 05 February 2026 01:02:48 +0000 (0:00:01.935) 0:00:15.659 ***** 2026-02-05 01:05:33.077566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 01:05:33.077570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.077578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.077582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.077590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.077594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.077598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-05 01:05:33.077605 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.077609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077645 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077673 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 01:05:33.077677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077696 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.077711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077715 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.077724 | orchestrator | 2026-02-05 01:05:33.077731 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-05 01:05:33.077737 | orchestrator | Thursday 05 February 2026 01:02:55 +0000 (0:00:06.519) 0:00:22.178 ***** 2026-02-05 01:05:33.077743 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 01:05:33.077750 | orchestrator | 2026-02-05 01:05:33.077756 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-05 01:05:33.077765 | orchestrator | Thursday 05 February 2026 01:02:56 +0000 (0:00:01.094) 0:00:23.273 ***** 2026-02-05 01:05:33.077771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077777 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077792 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077803 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077812 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077819 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077830 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.077837 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077859 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1083919, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3710928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077866 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077872 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077879 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077889 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083942, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.374537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077897 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083942, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.374537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077908 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077920 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077927 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077934 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077941 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083903, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3686547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077952 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083942, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.374537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077959 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.077971 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083922, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.371459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078351 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083903, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3686547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078382 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078390 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083922, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.371459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078398 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083934, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3729591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078402 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083942, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.374537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078413 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083942, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.374537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078425 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1083949, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3757112, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.078462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083934, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3729591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078467 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083903, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3686547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078471 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083903, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3686547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078475 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083903, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3686547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078479 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083922, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.371459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078486 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083922, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.371459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078497 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1083923, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3716753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078501 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1083923, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3716753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078508 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083934, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3729591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078512 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1083942, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.374537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1083917, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3706548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078520 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1083934, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3729591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078531 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1083923, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3716753, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078535 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1083922, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.371459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078539 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1083917, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3706548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078546 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1083917, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3706548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1083909, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3702264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.078554 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1083923, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3716753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078558 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083948, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.375399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:05:33.078569 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1083903, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3686547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 01:05:33.078573 | orchestrator | skipping: [testbed-node-0..5] => (remaining /operations/prometheus/*.rules loop items: cadvisor.rules, prometheus.rec.rules, prometheus-extra.rules, alertmanager.rules, alertmanager.rec.rules, elasticsearch.rules, node.rules, node.rec.rules, haproxy.rules, hardware.rules, mysql.rules, rabbitmq.rules, redfish.rules, ceph.rec.rules; identical per-item stat metadata elided)
2026-02-05 01:05:33.078700 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-02-05 01:05:33.078856 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-02-05 01:05:33.078968 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-02-05 01:05:33.079032 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2026-02-05 01:05:33.079116 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2026-02-05 01:05:33.079127 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1083917, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3706548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr':
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079134 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083948, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.375399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079142 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083897, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3677104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079152 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083957, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.379655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 
01:05:33.079159 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1083947, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3746548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079166 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1083907, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3690927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079181 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1083901, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.368427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079188 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1083931, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3722863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079196 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1083926, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3719027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079203 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1083956, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.378655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:05:33.079209 | orchestrator | 2026-02-05 01:05:33.079216 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-05 01:05:33.079223 | orchestrator | Thursday 05 February 2026 01:03:20 +0000 (0:00:23.880) 0:00:47.153 ***** 2026-02-05 01:05:33.079230 | orchestrator | 
ok: [testbed-manager -> localhost]
2026-02-05 01:05:33.079237 | orchestrator |
2026-02-05 01:05:33.079247 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-05 01:05:33.079254 | orchestrator | Thursday 05 February 2026 01:03:21 +0000 (0:00:00.869) 0:00:48.022 *****
2026-02-05 01:05:33.079260 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079299 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 01:05:33.079305 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079341 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 01:05:33.079346 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079376 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079406 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079469 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079491 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-05 01:05:33.079521 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:05:33.079528 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 01:05:33.079533 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 01:05:33.079540 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 01:05:33.079546 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 01:05:33.079552 | orchestrator |
2026-02-05 01:05:33.079558 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-05 01:05:33.079564 | orchestrator | Thursday 05 February 2026 01:03:23 +0000 (0:00:02.367) 0:00:50.389 *****
2026-02-05 01:05:33.079570 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079576 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.079582 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079588 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.079594 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079598 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.079601 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079605 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.079609 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079613 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.079617 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079627 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.079630 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 01:05:33.079634 | orchestrator |
2026-02-05 01:05:33.079638 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-05 01:05:33.079642 | orchestrator | Thursday 05 February 2026 01:03:37 +0000 (0:00:14.047) 0:01:04.437 *****
2026-02-05 01:05:33.079646 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079649 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.079658 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079662 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.079666 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079669 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.079673 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079677 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.079682 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079688 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.079694 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079703 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.079711 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 01:05:33.079717 | orchestrator |
2026-02-05 01:05:33.079722 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-05 01:05:33.079728 | orchestrator | Thursday 05 February 2026 01:03:40 +0000 (0:00:03.130) 0:01:07.568 *****
2026-02-05 01:05:33.079734 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079740 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.079745 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079750 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079757 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.079762 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.079768 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079840 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.079850 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079854 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.079858 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079862 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.079865 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 01:05:33.079869 | orchestrator |
2026-02-05 01:05:33.079875 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-05 01:05:33.079881 | orchestrator | Thursday 05 February 2026 01:03:42 +0000 (0:00:01.556) 0:01:09.124 *****
2026-02-05 01:05:33.079884 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 01:05:33.079888 | orchestrator |
2026-02-05 01:05:33.079901 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-05 01:05:33.079905 | orchestrator | Thursday 05 February 2026 01:03:43 +0000 (0:00:00.698) 0:01:09.823 *****
2026-02-05 01:05:33.079909 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:05:33.079913 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.079917 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.079920 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.079926 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.079932 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.079937 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.079944 | orchestrator |
2026-02-05 01:05:33.079954 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-05 01:05:33.079960 | orchestrator | Thursday 05 February 2026 01:03:43 +0000 (0:00:00.586) 0:01:10.409 *****
2026-02-05 01:05:33.079968 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:05:33.079974 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.079980 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.079987 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.079993 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:05:33.079998 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:05:33.080004 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:05:33.080009 | orchestrator |
2026-02-05 01:05:33.080016 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-05 01:05:33.080022 | orchestrator | Thursday 05 February 2026 01:03:45 +0000 (0:00:02.006) 0:01:12.416 *****
2026-02-05 01:05:33.080028 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080034 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080040 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:05:33.080045 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080051 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.080056 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.080062 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080068 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.080074 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080086 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.080092 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080098 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.080103 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 01:05:33.080109 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.080114 | orchestrator |
2026-02-05 01:05:33.080120 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-05 01:05:33.080125 | orchestrator | Thursday 05 February 2026 01:03:46 +0000 (0:00:01.361) 0:01:13.778 *****
2026-02-05 01:05:33.080131 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080137 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080143 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.080148 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.080153 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080159 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.080164 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080170 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.080181 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080187 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.080193 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080199 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 01:05:33.080205 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.080210 | orchestrator |
2026-02-05 01:05:33.080216 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-05 01:05:33.080228 | orchestrator | Thursday 05 February 2026 01:03:48 +0000 (0:00:01.191) 0:01:14.970 *****
2026-02-05 01:05:33.080234 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-02-05 01:05:33.080263 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 01:05:33.080269 | orchestrator |
2026-02-05 01:05:33.080275 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-02-05 01:05:33.080281 | orchestrator | Thursday 05 February 2026 01:03:49 +0000 (0:00:01.047) 0:01:16.017 *****
2026-02-05 01:05:33.080287 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:05:33.080293 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.080299 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.080305 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.080312 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.080318 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.080324 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.080330 | orchestrator |
2026-02-05 01:05:33.080336 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-05 01:05:33.080343 | orchestrator | Thursday 05 February 2026 01:03:49 +0000 (0:00:00.760) 0:01:16.778 *****
2026-02-05 01:05:33.080349 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:05:33.080355 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:05:33.080361 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:05:33.080367 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:05:33.080373 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:05:33.080380 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:05:33.080384 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:05:33.080388 | orchestrator |
2026-02-05 01:05:33.080392 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-02-05 01:05:33.080396 | orchestrator | Thursday 05 February 2026 01:03:50 +0000 (0:00:00.677) 0:01:17.455 *****
2026-02-05 01:05:33.080401 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 01:05:33.080408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080453 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:05:33.080524 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 01:05:33.080591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080633 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080656 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:05:33.080677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:05:33.080686 | orchestrator | 2026-02-05 01:05:33.080690 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-05 01:05:33.080694 | orchestrator | Thursday 05 February 2026 01:03:54 +0000 (0:00:04.112) 0:01:21.567 ***** 2026-02-05 01:05:33.080698 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 01:05:33.080701 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:05:33.080705 | orchestrator | 2026-02-05 01:05:33.080713 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:05:33.080718 | orchestrator | Thursday 05 February 2026 01:03:55 +0000 (0:00:01.052) 0:01:22.620 ***** 2026-02-05 01:05:33.080722 | orchestrator | 2026-02-05 01:05:33.080726 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:05:33.080730 | orchestrator | Thursday 05 February 2026 01:03:55 +0000 (0:00:00.065) 0:01:22.685 ***** 2026-02-05 01:05:33.080734 | orchestrator | 2026-02-05 01:05:33.080738 | orchestrator | TASK [prometheus : 
Flush handlers] ********************************************* 2026-02-05 01:05:33.080742 | orchestrator | Thursday 05 February 2026 01:03:55 +0000 (0:00:00.060) 0:01:22.746 ***** 2026-02-05 01:05:33.080746 | orchestrator | 2026-02-05 01:05:33.080749 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:05:33.080753 | orchestrator | Thursday 05 February 2026 01:03:55 +0000 (0:00:00.060) 0:01:22.806 ***** 2026-02-05 01:05:33.080757 | orchestrator | 2026-02-05 01:05:33.080761 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:05:33.080765 | orchestrator | Thursday 05 February 2026 01:03:56 +0000 (0:00:00.164) 0:01:22.971 ***** 2026-02-05 01:05:33.080768 | orchestrator | 2026-02-05 01:05:33.080772 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:05:33.080776 | orchestrator | Thursday 05 February 2026 01:03:56 +0000 (0:00:00.059) 0:01:23.031 ***** 2026-02-05 01:05:33.080780 | orchestrator | 2026-02-05 01:05:33.080786 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:05:33.080790 | orchestrator | Thursday 05 February 2026 01:03:56 +0000 (0:00:00.059) 0:01:23.090 ***** 2026-02-05 01:05:33.080794 | orchestrator | 2026-02-05 01:05:33.080797 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-05 01:05:33.080801 | orchestrator | Thursday 05 February 2026 01:03:56 +0000 (0:00:00.080) 0:01:23.170 ***** 2026-02-05 01:05:33.080805 | orchestrator | changed: [testbed-manager] 2026-02-05 01:05:33.080809 | orchestrator | 2026-02-05 01:05:33.080813 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-05 01:05:33.080816 | orchestrator | Thursday 05 February 2026 01:04:16 +0000 (0:00:20.356) 0:01:43.527 ***** 2026-02-05 01:05:33.080820 | 
orchestrator | changed: [testbed-node-0] 2026-02-05 01:05:33.080824 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:05:33.080828 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:05:33.080832 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:05:33.080835 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:05:33.080839 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:05:33.080843 | orchestrator | changed: [testbed-manager] 2026-02-05 01:05:33.080847 | orchestrator | 2026-02-05 01:05:33.080850 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-05 01:05:33.080854 | orchestrator | Thursday 05 February 2026 01:04:28 +0000 (0:00:11.822) 0:01:55.349 ***** 2026-02-05 01:05:33.080858 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:05:33.080862 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:05:33.080866 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:05:33.080870 | orchestrator | 2026-02-05 01:05:33.080873 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-05 01:05:33.080877 | orchestrator | Thursday 05 February 2026 01:04:34 +0000 (0:00:05.484) 0:02:00.834 ***** 2026-02-05 01:05:33.080881 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:05:33.080885 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:05:33.080889 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:05:33.080892 | orchestrator | 2026-02-05 01:05:33.080896 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-05 01:05:33.080902 | orchestrator | Thursday 05 February 2026 01:04:44 +0000 (0:00:10.896) 0:02:11.730 ***** 2026-02-05 01:05:33.080908 | orchestrator | changed: [testbed-manager] 2026-02-05 01:05:33.080914 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:05:33.080921 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:05:33.080938 | orchestrator | 
changed: [testbed-node-5] 2026-02-05 01:05:33.080943 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:05:33.080949 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:05:33.080960 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:05:33.080966 | orchestrator | 2026-02-05 01:05:33.080972 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-05 01:05:33.080977 | orchestrator | Thursday 05 February 2026 01:04:58 +0000 (0:00:13.987) 0:02:25.718 ***** 2026-02-05 01:05:33.080983 | orchestrator | changed: [testbed-manager] 2026-02-05 01:05:33.080989 | orchestrator | 2026-02-05 01:05:33.080994 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-05 01:05:33.081000 | orchestrator | Thursday 05 February 2026 01:05:10 +0000 (0:00:11.649) 0:02:37.368 ***** 2026-02-05 01:05:33.081006 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:05:33.081012 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:05:33.081018 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:05:33.081024 | orchestrator | 2026-02-05 01:05:33.081031 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-05 01:05:33.081038 | orchestrator | Thursday 05 February 2026 01:05:16 +0000 (0:00:05.666) 0:02:43.034 ***** 2026-02-05 01:05:33.081044 | orchestrator | changed: [testbed-manager] 2026-02-05 01:05:33.081051 | orchestrator | 2026-02-05 01:05:33.081057 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-05 01:05:33.081063 | orchestrator | Thursday 05 February 2026 01:05:20 +0000 (0:00:04.312) 0:02:47.347 ***** 2026-02-05 01:05:33.081069 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:05:33.081076 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:05:33.081083 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:05:33.081087 | orchestrator | 
2026-02-05 01:05:33.081091 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:05:33.081096 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-05 01:05:33.081100 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:05:33.081104 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:05:33.081108 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:05:33.081112 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:05:33.081117 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:05:33.081124 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:05:33.081131 | orchestrator | 2026-02-05 01:05:33.081140 | orchestrator | 2026-02-05 01:05:33.081149 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:05:33.081155 | orchestrator | Thursday 05 February 2026 01:05:31 +0000 (0:00:10.473) 0:02:57.820 ***** 2026-02-05 01:05:33.081166 | orchestrator | =============================================================================== 2026-02-05 01:05:33.081173 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.88s 2026-02-05 01:05:33.081180 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.36s 2026-02-05 01:05:33.081186 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.05s 2026-02-05 01:05:33.081191 | orchestrator | prometheus : Restart prometheus-cadvisor container 
--------------------- 13.99s 2026-02-05 01:05:33.081206 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.82s 2026-02-05 01:05:33.081212 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.65s 2026-02-05 01:05:33.081218 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.90s 2026-02-05 01:05:33.081223 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.47s 2026-02-05 01:05:33.081229 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.52s 2026-02-05 01:05:33.081234 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.67s 2026-02-05 01:05:33.081243 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.48s 2026-02-05 01:05:33.081251 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.21s 2026-02-05 01:05:33.081257 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.31s 2026-02-05 01:05:33.081263 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.11s 2026-02-05 01:05:33.081268 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.13s 2026-02-05 01:05:33.081274 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.97s 2026-02-05 01:05:33.081280 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.37s 2026-02-05 01:05:33.081286 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.01s 2026-02-05 01:05:33.081292 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.94s 2026-02-05 01:05:33.081298 | orchestrator | prometheus : Copying over prometheus alertmanager config file 
----------- 1.56s 2026-02-05 01:05:33.081310 | orchestrator | 2026-02-05 01:05:33 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:33.081318 | orchestrator | 2026-02-05 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:36.124398 | orchestrator | 2026-02-05 01:05:36 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:36.126476 | orchestrator | 2026-02-05 01:05:36 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:36.128293 | orchestrator | 2026-02-05 01:05:36 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:36.130950 | orchestrator | 2026-02-05 01:05:36 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:36.131268 | orchestrator | 2026-02-05 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:39.167035 | orchestrator | 2026-02-05 01:05:39 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:39.169005 | orchestrator | 2026-02-05 01:05:39 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:39.170218 | orchestrator | 2026-02-05 01:05:39 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:39.171792 | orchestrator | 2026-02-05 01:05:39 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:39.171821 | orchestrator | 2026-02-05 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:42.203937 | orchestrator | 2026-02-05 01:05:42 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:42.204783 | orchestrator | 2026-02-05 01:05:42 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:42.204883 | orchestrator | 2026-02-05 01:05:42 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 
01:05:42.206095 | orchestrator | 2026-02-05 01:05:42 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:42.206151 | orchestrator | 2026-02-05 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:45.237761 | orchestrator | 2026-02-05 01:05:45 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:45.239560 | orchestrator | 2026-02-05 01:05:45 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:45.241202 | orchestrator | 2026-02-05 01:05:45 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:45.243387 | orchestrator | 2026-02-05 01:05:45 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:45.243465 | orchestrator | 2026-02-05 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:48.286579 | orchestrator | 2026-02-05 01:05:48 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:48.287974 | orchestrator | 2026-02-05 01:05:48 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:48.289857 | orchestrator | 2026-02-05 01:05:48 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:48.291140 | orchestrator | 2026-02-05 01:05:48 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:48.291247 | orchestrator | 2026-02-05 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:51.326883 | orchestrator | 2026-02-05 01:05:51 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:51.328454 | orchestrator | 2026-02-05 01:05:51 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:51.329639 | orchestrator | 2026-02-05 01:05:51 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:51.331022 | orchestrator 
| 2026-02-05 01:05:51 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:51.331248 | orchestrator | 2026-02-05 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:54.358192 | orchestrator | 2026-02-05 01:05:54 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:54.359739 | orchestrator | 2026-02-05 01:05:54 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:54.359759 | orchestrator | 2026-02-05 01:05:54 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:54.360168 | orchestrator | 2026-02-05 01:05:54 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:54.360202 | orchestrator | 2026-02-05 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:57.384640 | orchestrator | 2026-02-05 01:05:57 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:05:57.385341 | orchestrator | 2026-02-05 01:05:57 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:05:57.386227 | orchestrator | 2026-02-05 01:05:57 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:05:57.387065 | orchestrator | 2026-02-05 01:05:57 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:05:57.387103 | orchestrator | 2026-02-05 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:00.425117 | orchestrator | 2026-02-05 01:06:00 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:00.425260 | orchestrator | 2026-02-05 01:06:00 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:00.428345 | orchestrator | 2026-02-05 01:06:00 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:00.430716 | orchestrator | 2026-02-05 01:06:00 | INFO  | 
Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:00.430771 | orchestrator | 2026-02-05 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:03.457540 | orchestrator | 2026-02-05 01:06:03 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:03.459731 | orchestrator | 2026-02-05 01:06:03 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:03.460789 | orchestrator | 2026-02-05 01:06:03 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:03.461395 | orchestrator | 2026-02-05 01:06:03 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:03.461509 | orchestrator | 2026-02-05 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:06.491591 | orchestrator | 2026-02-05 01:06:06 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:06.491795 | orchestrator | 2026-02-05 01:06:06 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:06.494759 | orchestrator | 2026-02-05 01:06:06 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:06.495335 | orchestrator | 2026-02-05 01:06:06 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:06.495435 | orchestrator | 2026-02-05 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:09.536573 | orchestrator | 2026-02-05 01:06:09 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:09.539757 | orchestrator | 2026-02-05 01:06:09 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:09.541076 | orchestrator | 2026-02-05 01:06:09 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:09.543258 | orchestrator | 2026-02-05 01:06:09 | INFO  | Task 
02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:09.543301 | orchestrator | 2026-02-05 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:12.589760 | orchestrator | 2026-02-05 01:06:12 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:12.590091 | orchestrator | 2026-02-05 01:06:12 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:12.590557 | orchestrator | 2026-02-05 01:06:12 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:12.591194 | orchestrator | 2026-02-05 01:06:12 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:12.591221 | orchestrator | 2026-02-05 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:15.636457 | orchestrator | 2026-02-05 01:06:15 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:15.638776 | orchestrator | 2026-02-05 01:06:15 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:15.641167 | orchestrator | 2026-02-05 01:06:15 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:15.644240 | orchestrator | 2026-02-05 01:06:15 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:15.644280 | orchestrator | 2026-02-05 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:18.688885 | orchestrator | 2026-02-05 01:06:18 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state STARTED 2026-02-05 01:06:18.691741 | orchestrator | 2026-02-05 01:06:18 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED 2026-02-05 01:06:18.691824 | orchestrator | 2026-02-05 01:06:18 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED 2026-02-05 01:06:18.692988 | orchestrator | 2026-02-05 01:06:18 | INFO  | Task 
02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED 2026-02-05 01:06:18.693041 | orchestrator | 2026-02-05 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:21.741739 | orchestrator | 2026-02-05 01:06:21 | INFO  | Task e0b08963-1279-4f3b-9f24-5d92527a92d4 is in state SUCCESS 2026-02-05 01:06:21.742672 | orchestrator | 2026-02-05 01:06:21.742706 | orchestrator | 2026-02-05 01:06:21.742714 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:06:21.742721 | orchestrator | 2026-02-05 01:06:21.742729 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:06:21.742736 | orchestrator | Thursday 05 February 2026 01:03:41 +0000 (0:00:00.440) 0:00:00.441 ***** 2026-02-05 01:06:21.742742 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:06:21.742750 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:06:21.742755 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:06:21.742759 | orchestrator | 2026-02-05 01:06:21.742764 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:06:21.742769 | orchestrator | Thursday 05 February 2026 01:03:41 +0000 (0:00:00.289) 0:00:00.730 ***** 2026-02-05 01:06:21.742773 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-05 01:06:21.742778 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-05 01:06:21.742782 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-05 01:06:21.742787 | orchestrator | 2026-02-05 01:06:21.742791 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-05 01:06:21.742796 | orchestrator | 2026-02-05 01:06:21.742801 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:06:21.742806 | orchestrator | Thursday 05 February 2026 01:03:42 +0000 
(0:00:00.419) 0:00:01.150 ***** 2026-02-05 01:06:21.742811 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:06:21.742816 | orchestrator | 2026-02-05 01:06:21.742820 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-05 01:06:21.742824 | orchestrator | Thursday 05 February 2026 01:03:42 +0000 (0:00:00.488) 0:00:01.639 ***** 2026-02-05 01:06:21.742829 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-05 01:06:21.742833 | orchestrator | 2026-02-05 01:06:21.742838 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-05 01:06:21.742843 | orchestrator | Thursday 05 February 2026 01:03:46 +0000 (0:00:04.294) 0:00:05.933 ***** 2026-02-05 01:06:21.742848 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-05 01:06:21.742853 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-05 01:06:21.742857 | orchestrator | 2026-02-05 01:06:21.742871 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-05 01:06:21.742875 | orchestrator | Thursday 05 February 2026 01:03:52 +0000 (0:00:06.030) 0:00:11.964 ***** 2026-02-05 01:06:21.742879 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:06:21.742883 | orchestrator | 2026-02-05 01:06:21.742886 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-05 01:06:21.742890 | orchestrator | Thursday 05 February 2026 01:03:55 +0000 (0:00:02.768) 0:00:14.732 ***** 2026-02-05 01:06:21.742894 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:06:21.742898 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-05 01:06:21.742902 
| orchestrator | 2026-02-05 01:06:21.742906 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-05 01:06:21.742920 | orchestrator | Thursday 05 February 2026 01:03:58 +0000 (0:00:03.199) 0:00:17.931 ***** 2026-02-05 01:06:21.742924 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:06:21.742928 | orchestrator | 2026-02-05 01:06:21.742932 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-05 01:06:21.742935 | orchestrator | Thursday 05 February 2026 01:04:01 +0000 (0:00:02.965) 0:00:20.897 ***** 2026-02-05 01:06:21.742939 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-05 01:06:21.742943 | orchestrator | 2026-02-05 01:06:21.742947 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-05 01:06:21.742950 | orchestrator | Thursday 05 February 2026 01:04:05 +0000 (0:00:03.237) 0:00:24.135 ***** 2026-02-05 01:06:21.742964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.742974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.742982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.742986 | orchestrator | 2026-02-05 01:06:21.742990 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:06:21.742994 | orchestrator | Thursday 05 February 2026 01:04:08 +0000 (0:00:03.171) 0:00:27.306 ***** 2026-02-05 01:06:21.742998 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:06:21.743023 | orchestrator | 2026-02-05 01:06:21.743030 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-05 01:06:21.743034 | orchestrator | Thursday 05 February 2026 01:04:08 +0000 (0:00:00.566) 0:00:27.873 ***** 2026-02-05 01:06:21.743038 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:21.743045 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:21.743051 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:21.743057 | orchestrator | 2026-02-05 01:06:21.743063 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-05 01:06:21.743068 | orchestrator | Thursday 05 February 2026 01:04:12 +0000 (0:00:03.388) 0:00:31.262 ***** 2026-02-05 01:06:21.743074 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:06:21.743079 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:06:21.743085 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:06:21.743090 | orchestrator | 2026-02-05 01:06:21.743096 | orchestrator | TASK [glance : Copy over 
ceph Glance keyrings] ********************************* 2026-02-05 01:06:21.743102 | orchestrator | Thursday 05 February 2026 01:04:13 +0000 (0:00:01.425) 0:00:32.687 ***** 2026-02-05 01:06:21.743108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:06:21.743114 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:06:21.743120 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:06:21.743130 | orchestrator | 2026-02-05 01:06:21.743134 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-05 01:06:21.743137 | orchestrator | Thursday 05 February 2026 01:04:14 +0000 (0:00:01.135) 0:00:33.823 ***** 2026-02-05 01:06:21.743141 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:06:21.743145 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:06:21.743149 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:06:21.743166 | orchestrator | 2026-02-05 01:06:21.743171 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-05 01:06:21.743176 | orchestrator | Thursday 05 February 2026 01:04:15 +0000 (0:00:00.663) 0:00:34.486 ***** 2026-02-05 01:06:21.743187 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:21.743195 | orchestrator | 2026-02-05 01:06:21.743203 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-05 01:06:21.743208 | orchestrator | Thursday 05 February 2026 01:04:15 +0000 (0:00:00.210) 0:00:34.696 ***** 2026-02-05 01:06:21.743214 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:21.743219 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:21.743225 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:21.743231 | orchestrator | 
2026-02-05 01:06:21.743237 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:06:21.743243 | orchestrator | Thursday 05 February 2026 01:04:15 +0000 (0:00:00.258) 0:00:34.955 ***** 2026-02-05 01:06:21.743248 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:06:21.743254 | orchestrator | 2026-02-05 01:06:21.743260 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-05 01:06:21.743266 | orchestrator | Thursday 05 February 2026 01:04:16 +0000 (0:00:00.505) 0:00:35.460 ***** 2026-02-05 01:06:21.743277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743314 | orchestrator | 2026-02-05 01:06:21.743321 | orchestrator | TASK [service-cert-copy : 
glance | Copying over backend internal TLS certificate] *** 2026-02-05 01:06:21.743327 | orchestrator | Thursday 05 February 2026 01:04:24 +0000 (0:00:07.626) 0:00:43.087 ***** 2026-02-05 01:06:21.743338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:06:21.743346 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
01:06:21.743352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:06:21.743356 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:21.743363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:06:21.743370 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:21.743374 | orchestrator | 2026-02-05 01:06:21.743377 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-05 01:06:21.743381 | orchestrator | Thursday 05 February 2026 01:04:26 +0000 (0:00:02.830) 0:00:45.917 ***** 2026-02-05 01:06:21.743387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:06:21.743429 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:21.743438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:06:21.743446 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:21.743452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:06:21.743457 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:21.743463 | orchestrator | 2026-02-05 01:06:21.743471 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-05 01:06:21.743480 | orchestrator | Thursday 05 February 2026 01:04:30 +0000 (0:00:03.950) 0:00:49.868 ***** 2026-02-05 01:06:21.743485 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:21.743492 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:21.743498 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:21.743503 | orchestrator | 2026-02-05 01:06:21.743509 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-05 
01:06:21.743514 | orchestrator | Thursday 05 February 2026 01:04:33 +0000 (0:00:03.116) 0:00:52.985 ***** 2026-02-05 01:06:21.743521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743553 | orchestrator | 2026-02-05 01:06:21.743559 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-05 01:06:21.743564 | orchestrator | Thursday 05 February 2026 01:04:38 +0000 (0:00:04.314) 0:00:57.299 ***** 2026-02-05 01:06:21.743570 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:21.743576 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:21.743587 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:21.743593 | orchestrator | 2026-02-05 01:06:21.743599 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-05 
01:06:21.743605 | orchestrator | Thursday 05 February 2026 01:04:42 +0000 (0:00:04.579) 0:01:01.878 *****
2026-02-05 01:06:21.743611 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:21.743618 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:21.743624 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:21.743631 | orchestrator |
2026-02-05 01:06:21.743637 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-02-05 01:06:21.743643 | orchestrator | Thursday 05 February 2026 01:04:48 +0000 (0:00:05.202) 0:01:07.081 *****
2026-02-05 01:06:21.743650 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:21.743735 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:21.743741 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:21.743747 | orchestrator |
2026-02-05 01:06:21.743754 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-02-05 01:06:21.743760 | orchestrator | Thursday 05 February 2026 01:04:53 +0000 (0:00:05.018) 0:01:12.100 *****
2026-02-05 01:06:21.743766 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:21.743772 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:21.743778 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:21.743784 | orchestrator |
2026-02-05 01:06:21.743789 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-02-05 01:06:21.743795 | orchestrator | Thursday 05 February 2026 01:04:57 +0000 (0:00:04.166) 0:01:16.266 *****
2026-02-05 01:06:21.743801 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:21.743806 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:21.743812 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:21.743818 | orchestrator |
2026-02-05 01:06:21.743824 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-02-05 01:06:21.743829 | orchestrator | Thursday 05 February 2026 01:05:01 +0000 (0:00:03.888) 0:01:20.155 *****
2026-02-05 01:06:21.743834 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:21.743840 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:21.743845 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:21.743851 | orchestrator |
2026-02-05 01:06:21.743857 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-02-05 01:06:21.743862 | orchestrator | Thursday 05 February 2026 01:05:01 +0000 (0:00:00.390) 0:01:20.546 *****
2026-02-05 01:06:21.743868 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-05 01:06:21.743874 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:21.743881 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-05 01:06:21.743887 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-05 01:06:21.743893 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:21.743900 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:21.743906 | orchestrator |
2026-02-05 01:06:21.743912 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-02-05 01:06:21.743919 | orchestrator | Thursday 05 February 2026 01:05:06 +0000 (0:00:04.871) 0:01:25.417 *****
2026-02-05 01:06:21.743926 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:06:21.743936 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:06:21.743943 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.743949 | orchestrator |
2026-02-05 01:06:21.743956 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-02-05 01:06:21.743962 | orchestrator | Thursday 05 February 2026 01:05:10 +0000 (0:00:04.457)
0:01:29.875 ***** 2026-02-05 01:06:21.743969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:06:21.743997 | orchestrator | 2026-02-05 01:06:21.744001 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:06:21.744005 | orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:04.228) 0:01:34.103 ***** 2026-02-05 01:06:21.744009 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:21.744012 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:21.744016 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:21.744020 | orchestrator | 2026-02-05 01:06:21.744024 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-05 01:06:21.744027 
| orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:00.225) 0:01:34.328 *****
2026-02-05 01:06:21.744034 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.744041 | orchestrator |
2026-02-05 01:06:21.744050 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-02-05 01:06:21.744056 | orchestrator | Thursday 05 February 2026 01:05:17 +0000 (0:00:01.902) 0:01:36.230 *****
2026-02-05 01:06:21.744061 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.744067 | orchestrator |
2026-02-05 01:06:21.744073 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-02-05 01:06:21.744079 | orchestrator | Thursday 05 February 2026 01:05:19 +0000 (0:00:02.456) 0:01:38.687 *****
2026-02-05 01:06:21.744085 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.744091 | orchestrator |
2026-02-05 01:06:21.744098 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-02-05 01:06:21.744104 | orchestrator | Thursday 05 February 2026 01:05:22 +0000 (0:00:02.437) 0:01:41.124 *****
2026-02-05 01:06:21.744110 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.744116 | orchestrator |
2026-02-05 01:06:21.744123 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-02-05 01:06:21.744133 | orchestrator | Thursday 05 February 2026 01:05:50 +0000 (0:00:28.098) 0:02:09.223 *****
2026-02-05 01:06:21.744137 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.744141 | orchestrator |
2026-02-05 01:06:21.744145 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-05 01:06:21.744149 | orchestrator | Thursday 05 February 2026 01:05:51 +0000 (0:00:01.723) 0:02:10.946 *****
2026-02-05 01:06:21.744152 | orchestrator |
2026-02-05 01:06:21.744156 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-05 01:06:21.744160 | orchestrator | Thursday 05 February 2026 01:05:52 +0000 (0:00:00.177) 0:02:11.124 *****
2026-02-05 01:06:21.744164 | orchestrator |
2026-02-05 01:06:21.744167 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-05 01:06:21.744207 | orchestrator | Thursday 05 February 2026 01:05:52 +0000 (0:00:00.058) 0:02:11.183 *****
2026-02-05 01:06:21.744211 | orchestrator |
2026-02-05 01:06:21.744215 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-02-05 01:06:21.744220 | orchestrator | Thursday 05 February 2026 01:05:52 +0000 (0:00:00.057) 0:02:11.240 *****
2026-02-05 01:06:21.744228 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:06:21.744237 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:06:21.744243 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:06:21.744256 | orchestrator |
2026-02-05 01:06:21.744263 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:06:21.744270 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-05 01:06:21.744277 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-05 01:06:21.744283 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-05 01:06:21.744289 | orchestrator |
2026-02-05 01:06:21.744295 | orchestrator |
2026-02-05 01:06:21.744301 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:06:21.744308 | orchestrator | Thursday 05 February 2026 01:06:19 +0000 (0:00:27.105) 0:02:38.346 *****
2026-02-05 01:06:21.744314 | orchestrator | ===============================================================================
2026-02-05 01:06:21.744324 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.10s
2026-02-05 01:06:21.744331 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.11s
2026-02-05 01:06:21.744337 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.63s
2026-02-05 01:06:21.744343 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.03s
2026-02-05 01:06:21.744350 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.20s
2026-02-05 01:06:21.744356 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.02s
2026-02-05 01:06:21.744362 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.87s
2026-02-05 01:06:21.744368 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.58s
2026-02-05 01:06:21.744375 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.46s
2026-02-05 01:06:21.744380 | orchestrator | glance : Copying over config.json files for services -------------------- 4.31s
2026-02-05 01:06:21.744387 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.29s
2026-02-05 01:06:21.744403 | orchestrator | glance : Check glance containers ---------------------------------------- 4.23s
2026-02-05 01:06:21.744410 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.17s
2026-02-05 01:06:21.744417 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.95s
2026-02-05 01:06:21.744423 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.89s
2026-02-05 01:06:21.744429 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.39s
2026-02-05 01:06:21.744436 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.24s
2026-02-05 01:06:21.744443 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.20s
2026-02-05 01:06:21.744450 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.17s
2026-02-05 01:06:21.744456 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.12s
2026-02-05 01:06:21.744463 | orchestrator | 2026-02-05 01:06:21 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:21.745434 | orchestrator | 2026-02-05 01:06:21 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:21.746943 | orchestrator | 2026-02-05 01:06:21 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:21.750212 | orchestrator | 2026-02-05 01:06:21 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:21.750264 | orchestrator | 2026-02-05 01:06:21 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:24.782224 | orchestrator | 2026-02-05 01:06:24 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:24.785288 | orchestrator | 2026-02-05 01:06:24 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:24.786629 | orchestrator | 2026-02-05 01:06:24 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:24.787333 | orchestrator | 2026-02-05 01:06:24 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:24.787444 | orchestrator | 2026-02-05 01:06:24 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:27.836610 | orchestrator | 2026-02-05 01:06:27 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:27.841017 | orchestrator | 2026-02-05 01:06:27 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:27.843871 | orchestrator | 2026-02-05 01:06:27 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:27.845161 | orchestrator | 2026-02-05 01:06:27 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:27.845201 | orchestrator | 2026-02-05 01:06:27 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:30.900169 | orchestrator | 2026-02-05 01:06:30 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:30.902527 | orchestrator | 2026-02-05 01:06:30 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:30.904576 | orchestrator | 2026-02-05 01:06:30 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:30.906746 | orchestrator | 2026-02-05 01:06:30 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:30.906789 | orchestrator | 2026-02-05 01:06:30 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:33.950505 | orchestrator | 2026-02-05 01:06:33 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:33.952574 | orchestrator | 2026-02-05 01:06:33 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:33.953123 | orchestrator | 2026-02-05 01:06:33 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:33.955041 | orchestrator | 2026-02-05 01:06:33 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:33.956143 | orchestrator | 2026-02-05 01:06:33 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:37.005852 | orchestrator | 2026-02-05 01:06:37 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:37.007050 | orchestrator | 2026-02-05 01:06:37 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:37.010598 | orchestrator | 2026-02-05 01:06:37 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:37.012588 | orchestrator | 2026-02-05 01:06:37 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:37.013127 | orchestrator | 2026-02-05 01:06:37 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:40.062164 | orchestrator | 2026-02-05 01:06:40 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:40.065342 | orchestrator | 2026-02-05 01:06:40 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:40.065420 | orchestrator | 2026-02-05 01:06:40 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:40.069704 | orchestrator | 2026-02-05 01:06:40 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:40.069805 | orchestrator | 2026-02-05 01:06:40 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:43.112613 | orchestrator | 2026-02-05 01:06:43 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:43.113357 | orchestrator | 2026-02-05 01:06:43 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:43.114367 | orchestrator | 2026-02-05 01:06:43 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:43.116987 | orchestrator | 2026-02-05 01:06:43 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:43.117036 | orchestrator | 2026-02-05 01:06:43 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:46.156680 | orchestrator | 2026-02-05 01:06:46 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:46.157772 | orchestrator | 2026-02-05 01:06:46 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:46.159057 | orchestrator | 2026-02-05 01:06:46 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:46.160597 | orchestrator | 2026-02-05 01:06:46 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:46.160637 | orchestrator | 2026-02-05 01:06:46 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:49.213195 | orchestrator | 2026-02-05 01:06:49 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:49.214159 | orchestrator | 2026-02-05 01:06:49 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:49.215339 | orchestrator | 2026-02-05 01:06:49 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:49.216833 | orchestrator | 2026-02-05 01:06:49 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:49.216867 | orchestrator | 2026-02-05 01:06:49 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:52.248067 | orchestrator | 2026-02-05 01:06:52 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:52.249569 | orchestrator | 2026-02-05 01:06:52 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:52.250931 | orchestrator | 2026-02-05 01:06:52 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:52.253156 | orchestrator | 2026-02-05 01:06:52 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:52.253200 | orchestrator | 2026-02-05 01:06:52 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:55.290573 | orchestrator | 2026-02-05 01:06:55 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:55.290624 | orchestrator | 2026-02-05 01:06:55 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:55.290884 | orchestrator | 2026-02-05 01:06:55 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:55.291778 | orchestrator | 2026-02-05 01:06:55 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:55.292030 | orchestrator | 2026-02-05 01:06:55 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:06:58.332422 | orchestrator | 2026-02-05 01:06:58 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:06:58.332700 | orchestrator | 2026-02-05 01:06:58 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:06:58.333804 | orchestrator | 2026-02-05 01:06:58 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:06:58.334741 | orchestrator | 2026-02-05 01:06:58 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:06:58.334771 | orchestrator | 2026-02-05 01:06:58 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:07:01.369681 | orchestrator | 2026-02-05 01:07:01 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:07:01.369971 | orchestrator | 2026-02-05 01:07:01 | INFO  | Task a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state STARTED
2026-02-05 01:07:01.370689 | orchestrator | 2026-02-05 01:07:01 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:07:01.371254 | orchestrator | 2026-02-05 01:07:01 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:07:01.371277 | orchestrator | 2026-02-05 01:07:01 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:07:04.403927 | orchestrator | 2026-02-05 01:07:04 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:07:04.407123 | orchestrator | 2026-02-05 01:07:04 | INFO  | Task
a08df1ed-0aa6-45b6-b9bb-78d7ce46e77a is in state SUCCESS
2026-02-05 01:07:04.408424 | orchestrator |
2026-02-05 01:07:04.408472 | orchestrator |
2026-02-05 01:07:04.408479 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:07:04.408485 | orchestrator |
2026-02-05 01:07:04.408490 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:07:04.408512 | orchestrator | Thursday 05 February 2026 01:03:54 +0000 (0:00:00.238) 0:00:00.238 *****
2026-02-05 01:07:04.408518 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:07:04.408548 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:07:04.408563 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:07:04.408567 | orchestrator |
2026-02-05 01:07:04.408570 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:07:04.408573 | orchestrator | Thursday 05 February 2026 01:03:54 +0000 (0:00:00.270) 0:00:00.509 *****
2026-02-05 01:07:04.408577 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-02-05 01:07:04.408580 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-02-05 01:07:04.408583 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-02-05 01:07:04.408587 | orchestrator |
2026-02-05 01:07:04.408595 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-02-05 01:07:04.408598 | orchestrator |
2026-02-05 01:07:04.408602 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-05 01:07:04.408605 | orchestrator | Thursday 05 February 2026 01:03:54 +0000 (0:00:00.365) 0:00:00.874 *****
2026-02-05 01:07:04.408608 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:07:04.408612 | orchestrator |
2026-02-05 01:07:04.408615 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-02-05 01:07:04.408619 | orchestrator | Thursday 05 February 2026 01:03:55 +0000 (0:00:00.482) 0:00:01.357 *****
2026-02-05 01:07:04.408646 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-02-05 01:07:04.408649 | orchestrator |
2026-02-05 01:07:04.408653 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-02-05 01:07:04.408656 | orchestrator | Thursday 05 February 2026 01:03:58 +0000 (0:00:02.883) 0:00:04.241 *****
2026-02-05 01:07:04.408664 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-02-05 01:07:04.408668 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-02-05 01:07:04.408671 | orchestrator |
2026-02-05 01:07:04.408674 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-02-05 01:07:04.408695 | orchestrator | Thursday 05 February 2026 01:04:03 +0000 (0:00:05.399) 0:00:09.641 *****
2026-02-05 01:07:04.408698 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 01:07:04.408702 | orchestrator |
2026-02-05 01:07:04.408714 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-02-05 01:07:04.408719 | orchestrator | Thursday 05 February 2026 01:04:06 +0000 (0:00:02.917) 0:00:12.559 *****
2026-02-05 01:07:04.408729 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:07:04.408734 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-02-05 01:07:04.408739 | orchestrator |
2026-02-05 01:07:04.408743 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-02-05 01:07:04.408747 | orchestrator | Thursday 05
February 2026 01:04:09 +0000 (0:00:03.639) 0:00:16.199 ***** 2026-02-05 01:07:04.408752 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:07:04.408757 | orchestrator | 2026-02-05 01:07:04.408769 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-05 01:07:04.408773 | orchestrator | Thursday 05 February 2026 01:04:13 +0000 (0:00:03.259) 0:00:19.458 ***** 2026-02-05 01:07:04.408779 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-05 01:07:04.408783 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-05 01:07:04.408788 | orchestrator | 2026-02-05 01:07:04.408792 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-05 01:07:04.408797 | orchestrator | Thursday 05 February 2026 01:04:21 +0000 (0:00:08.553) 0:00:28.012 ***** 2026-02-05 01:07:04.408804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.408822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.408828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.408839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.408893 | orchestrator | 2026-02-05 01:07:04.408896 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:04.408900 | orchestrator | Thursday 05 February 2026 01:04:24 +0000 (0:00:02.900) 0:00:30.912 ***** 2026-02-05 01:07:04.408903 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:04.408906 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:04.408909 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:04.408913 | orchestrator | 2026-02-05 01:07:04.408916 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:04.408919 | orchestrator | Thursday 05 February 2026 01:04:25 +0000 (0:00:00.434) 0:00:31.346 ***** 2026-02-05 01:07:04.408922 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:04.408985 | orchestrator | 2026-02-05 01:07:04.408995 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-05 01:07:04.409001 | orchestrator | Thursday 05 February 2026 01:04:25 +0000 (0:00:00.641) 0:00:31.988 ***** 2026-02-05 01:07:04.409007 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-05 01:07:04.409013 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-05 01:07:04.409026 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-05 01:07:04.409031 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-05 01:07:04.409043 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-05 01:07:04.409049 | orchestrator | 
changed: [testbed-node-1] => (item=cinder-backup) 2026-02-05 01:07:04.409055 | orchestrator | 2026-02-05 01:07:04.409060 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-05 01:07:04.409065 | orchestrator | Thursday 05 February 2026 01:04:27 +0000 (0:00:01.741) 0:00:33.729 ***** 2026-02-05 01:07:04.409071 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:04.409077 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 
'ceph', 'enabled': True}])  2026-02-05 01:07:04.409086 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:04.409093 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:04.409103 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:04.409112 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:04.409117 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:04.409124 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:04.409128 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:04.409136 | orchestrator | changed: [testbed-node-2] => 
(item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:04.409153 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:04.409159 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:04.409165 | orchestrator | 2026-02-05 01:07:04.409171 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-05 01:07:04.409175 | orchestrator | Thursday 05 February 2026 01:04:31 +0000 (0:00:03.840) 0:00:37.569 ***** 2026-02-05 01:07:04.409181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:04.409187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:04.409194 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:04.409200 | orchestrator | 2026-02-05 01:07:04.409206 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-05 01:07:04.409211 | orchestrator | Thursday 05 February 2026 01:04:33 +0000 (0:00:01.900) 0:00:39.470 ***** 2026-02-05 01:07:04.409218 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-05 01:07:04.409222 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-05 01:07:04.409226 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-05 01:07:04.409230 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 01:07:04.409234 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 01:07:04.409238 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 01:07:04.409242 | orchestrator | 
2026-02-05 01:07:04.409245 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-05 01:07:04.409249 | orchestrator | Thursday 05 February 2026 01:04:36 +0000 (0:00:03.472) 0:00:42.942 ***** 2026-02-05 01:07:04.409253 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-05 01:07:04.409257 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-05 01:07:04.409263 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-05 01:07:04.409267 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-05 01:07:04.409271 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-05 01:07:04.409275 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-05 01:07:04.409279 | orchestrator | 2026-02-05 01:07:04.409283 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-05 01:07:04.409287 | orchestrator | Thursday 05 February 2026 01:04:37 +0000 (0:00:01.001) 0:00:43.944 ***** 2026-02-05 01:07:04.409291 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:04.409294 | orchestrator | 2026-02-05 01:07:04.409298 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-05 01:07:04.409302 | orchestrator | Thursday 05 February 2026 01:04:37 +0000 (0:00:00.107) 0:00:44.051 ***** 2026-02-05 01:07:04.409306 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:04.409310 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:04.409317 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:04.409320 | orchestrator | 2026-02-05 01:07:04.409324 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:04.409328 | orchestrator | Thursday 05 February 2026 01:04:38 +0000 (0:00:00.238) 0:00:44.290 ***** 2026-02-05 01:07:04.409332 | orchestrator | included: 
/ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:04.409336 | orchestrator | 2026-02-05 01:07:04.409339 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-05 01:07:04.409343 | orchestrator | Thursday 05 February 2026 01:04:38 +0000 (0:00:00.599) 0:00:44.890 ***** 2026-02-05 01:07:04.409347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.409352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.409374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.409388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.409402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.409408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.409413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.409420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409637 | orchestrator |
2026-02-05 01:07:04.409640 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-02-05 01:07:04.409643 | orchestrator | Thursday 05 February 2026 01:04:42 +0000 (0:00:04.114) 0:00:49.005 *****
2026-02-05 01:07:04.409647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409671 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:07:04.409674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409692 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:07:04.409695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409712 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:07:04.409715 | orchestrator |
2026-02-05 01:07:04.409719 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-05 01:07:04.409722 | orchestrator | Thursday 05 February 2026 01:04:43 +0000 (0:00:00.775) 0:00:49.780 *****
2026-02-05 01:07:04.409727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409743 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:07:04.409747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409766 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:07:04.409769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409789 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:07:04.409793 | orchestrator |
2026-02-05 01:07:04.409796 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-02-05 01:07:04.409799 | orchestrator | Thursday 05 February 2026 01:04:45 +0000 (0:00:01.520) 0:00:51.301 *****
2026-02-05 01:07:04.409802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409856 | orchestrator |
2026-02-05 01:07:04.409859 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-02-05 01:07:04.409862 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:05.686) 0:00:56.987 *****
2026-02-05 01:07:04.409866 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-05 01:07:04.409870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-05 01:07:04.409874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-05 01:07:04.409877 | orchestrator |
2026-02-05 01:07:04.409880 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-02-05 01:07:04.409883 | orchestrator | Thursday 05 February 2026 01:04:52 +0000 (0:00:01.990) 0:00:58.978 *****
2026-02-05 01:07:04.409886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 01:07:04.409901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 01:07:04.409932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes':
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.409936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.409941 | orchestrator | 2026-02-05 01:07:04.409944 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-05 01:07:04.409947 | orchestrator | Thursday 05 February 2026 01:05:06 +0000 (0:00:13.811) 0:01:12.790 ***** 2026-02-05 01:07:04.409950 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.409954 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:04.409957 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:04.409960 | orchestrator | 2026-02-05 01:07:04.409963 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-05 01:07:04.409966 | orchestrator | Thursday 05 February 2026 01:05:08 +0000 (0:00:01.932) 0:01:14.722 ***** 2026-02-05 01:07:04.409969 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:04.409974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.409978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.409984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.409993 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:04.409999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:04.410003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.410010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.410056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.410063 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:04.410072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:04.410082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.410088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.410094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:04.410117 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:04.410121 | orchestrator | 2026-02-05 01:07:04.410124 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-05 01:07:04.410127 | orchestrator | Thursday 05 February 2026 01:05:09 +0000 (0:00:00.803) 0:01:15.525 ***** 2026-02-05 01:07:04.410136 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:04.410141 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:04.410148 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:04.410155 | orchestrator | 
2026-02-05 01:07:04.410160 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-05 01:07:04.410165 | orchestrator | Thursday 05 February 2026 01:05:09 +0000 (0:00:00.356) 0:01:15.882 ***** 2026-02-05 01:07:04.410170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.410184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.410190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:04.410196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:04.410259 | orchestrator | 2026-02-05 01:07:04.410265 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:04.410270 | orchestrator | Thursday 05 February 2026 01:05:13 +0000 (0:00:03.456) 0:01:19.338 ***** 2026-02-05 01:07:04.410275 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:04.410280 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
01:07:04.410286 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:04.410291 | orchestrator | 2026-02-05 01:07:04.410297 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-05 01:07:04.410302 | orchestrator | Thursday 05 February 2026 01:05:13 +0000 (0:00:00.554) 0:01:19.893 ***** 2026-02-05 01:07:04.410308 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410313 | orchestrator | 2026-02-05 01:07:04.410318 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-05 01:07:04.410324 | orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:02.177) 0:01:22.070 ***** 2026-02-05 01:07:04.410329 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410335 | orchestrator | 2026-02-05 01:07:04.410340 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-05 01:07:04.410348 | orchestrator | Thursday 05 February 2026 01:05:18 +0000 (0:00:02.171) 0:01:24.241 ***** 2026-02-05 01:07:04.410353 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410375 | orchestrator | 2026-02-05 01:07:04.410381 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-05 01:07:04.410387 | orchestrator | Thursday 05 February 2026 01:05:38 +0000 (0:00:20.957) 0:01:45.199 ***** 2026-02-05 01:07:04.410392 | orchestrator | 2026-02-05 01:07:04.410397 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-05 01:07:04.410403 | orchestrator | Thursday 05 February 2026 01:05:39 +0000 (0:00:00.059) 0:01:45.258 ***** 2026-02-05 01:07:04.410408 | orchestrator | 2026-02-05 01:07:04.410413 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-05 01:07:04.410419 | orchestrator | Thursday 05 February 2026 01:05:39 +0000 (0:00:00.061) 0:01:45.320 ***** 2026-02-05 
01:07:04.410424 | orchestrator | 2026-02-05 01:07:04.410429 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-05 01:07:04.410434 | orchestrator | Thursday 05 February 2026 01:05:39 +0000 (0:00:00.064) 0:01:45.384 ***** 2026-02-05 01:07:04.410440 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410445 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:04.410451 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:04.410456 | orchestrator | 2026-02-05 01:07:04.410461 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-05 01:07:04.410466 | orchestrator | Thursday 05 February 2026 01:06:11 +0000 (0:00:32.784) 0:02:18.168 ***** 2026-02-05 01:07:04.410471 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410477 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:04.410482 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:04.410487 | orchestrator | 2026-02-05 01:07:04.410492 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-05 01:07:04.410498 | orchestrator | Thursday 05 February 2026 01:06:22 +0000 (0:00:10.899) 0:02:29.068 ***** 2026-02-05 01:07:04.410503 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410509 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:04.410514 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:04.410519 | orchestrator | 2026-02-05 01:07:04.410525 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-05 01:07:04.410530 | orchestrator | Thursday 05 February 2026 01:06:49 +0000 (0:00:26.898) 0:02:55.966 ***** 2026-02-05 01:07:04.410535 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:04.410541 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:04.410546 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:04.410556 
| orchestrator |
2026-02-05 01:07:04.410562 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-02-05 01:07:04.410567 | orchestrator | Thursday 05 February 2026 01:07:01 +0000 (0:00:11.471) 0:03:07.438 *****
2026-02-05 01:07:04.410573 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:07:04.410579 | orchestrator |
2026-02-05 01:07:04.410584 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:07:04.410590 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 01:07:04.410596 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 01:07:04.410604 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 01:07:04.410610 | orchestrator |
2026-02-05 01:07:04.410615 | orchestrator |
2026-02-05 01:07:04.410620 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:07:04.410624 | orchestrator | Thursday 05 February 2026 01:07:01 +0000 (0:00:00.241) 0:03:07.679 *****
2026-02-05 01:07:04.410627 | orchestrator | ===============================================================================
2026-02-05 01:07:04.410630 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.78s
2026-02-05 01:07:04.410633 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.90s
2026-02-05 01:07:04.410637 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.96s
2026-02-05 01:07:04.410640 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.81s
2026-02-05 01:07:04.410643 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.47s
2026-02-05 01:07:04.410646 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.90s
2026-02-05 01:07:04.410649 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.55s
2026-02-05 01:07:04.410652 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.69s
2026-02-05 01:07:04.410655 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.40s
2026-02-05 01:07:04.410658 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.11s
2026-02-05 01:07:04.410661 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.84s
2026-02-05 01:07:04.410664 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.64s
2026-02-05 01:07:04.410667 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.47s
2026-02-05 01:07:04.410670 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.46s
2026-02-05 01:07:04.410673 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.26s
2026-02-05 01:07:04.410676 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.92s
2026-02-05 01:07:04.410679 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.90s
2026-02-05 01:07:04.410685 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.88s
2026-02-05 01:07:04.410688 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.18s
2026-02-05 01:07:04.410691 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.17s
2026-02-05 01:07:04.410738 | orchestrator | 2026-02-05 01:07:04 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:07:04.413393 | orchestrator | 2026-02-05 01:07:04 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:07:04.413436 | orchestrator | 2026-02-05 01:07:04 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:07:07.453162 | orchestrator | 2026-02-05 01:07:07 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:07:07.455021 | orchestrator | 2026-02-05 01:07:07 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:07:07.457071 | orchestrator | 2026-02-05 01:07:07 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:07:07.457111 | orchestrator | 2026-02-05 01:07:07 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:08:47.934477 | orchestrator | 2026-02-05 01:08:47 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state STARTED
2026-02-05 01:08:47.936394 | orchestrator | 2026-02-05 01:08:47 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:08:47.938990 | orchestrator | 2026-02-05 01:08:47 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:08:47.939046 | orchestrator | 2026-02-05 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:08:50.980669 | orchestrator | 2026-02-05 01:08:50 | INFO  | Task a87aa8ca-1df8-46f3-b848-b23ab0ee671d is in state SUCCESS
2026-02-05 01:08:50.981815 | orchestrator |
2026-02-05 01:08:50.981861 | orchestrator |
2026-02-05 01:08:50.981871 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:08:50.981879 | orchestrator |
2026-02-05 01:08:50.981887 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:08:50.981894 | orchestrator | Thursday 05 February 2026 01:06:23 +0000 (0:00:00.288) 0:00:00.288 *****
2026-02-05 01:08:50.981901 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:08:50.981908 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:08:50.981915 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:08:50.981922 | orchestrator |
2026-02-05 01:08:50.981960 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:08:50.981967 | orchestrator | Thursday 05 February 2026 01:06:24 +0000 (0:00:00.355) 0:00:00.644 *****
2026-02-05 01:08:50.981973 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-05 01:08:50.982043 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-05 01:08:50.982054 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-05 01:08:50.982061 | orchestrator |
2026-02-05 01:08:50.982068 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-05 01:08:50.982075 | orchestrator |
2026-02-05 01:08:50.982082 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-05 01:08:50.982089 | orchestrator | Thursday 05 February 2026 01:06:24 +0000 (0:00:00.528) 0:00:01.173 *****
2026-02-05 01:08:50.982096 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:08:50.982104 | orchestrator
|
2026-02-05 01:08:50.982110 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-05 01:08:50.982117 | orchestrator | Thursday 05 February 2026 01:06:25 +0000 (0:00:00.575) 0:00:01.748 *****
2026-02-05 01:08:50.982126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982159 | orchestrator |
2026-02-05 01:08:50.982166 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-05 01:08:50.982173 | orchestrator | Thursday 05 February 2026 01:06:26 +0000 (0:00:00.950) 0:00:02.699 *****
2026-02-05 01:08:50.982188 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-05 01:08:50.982201 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-05 01:08:50.982208 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:08:50.982291 | orchestrator |
2026-02-05 01:08:50.982299 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-05 01:08:50.982307 | orchestrator | Thursday 05 February 2026 01:06:27 +0000 (0:00:00.884) 0:00:03.584 *****
2026-02-05 01:08:50.982314 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:08:50.982321 | orchestrator |
2026-02-05 01:08:50.982328 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-05 01:08:50.982342 | orchestrator | Thursday 05 February 2026 01:06:27 +0000 (0:00:00.658) 0:00:04.242 *****
2026-02-05 01:08:50.982360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982380 | orchestrator |
2026-02-05 01:08:50.982386 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-02-05 01:08:50.982393 | orchestrator | Thursday 05 February 2026 01:06:28 +0000 (0:00:01.270) 0:00:05.513 *****
2026-02-05 01:08:50.982404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982411 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:08:50.982418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982429 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:08:50.982442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982449 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:08:50.982456 | orchestrator |
2026-02-05 01:08:50.982463 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-02-05 01:08:50.982469 | orchestrator | Thursday 05 February 2026 01:06:29 +0000 (0:00:00.322) 0:00:05.836 *****
2026-02-05 01:08:50.982477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982484 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:08:50.982491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982498 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:08:50.982505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982511 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:08:50.982519 | orchestrator |
2026-02-05 01:08:50.982526 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-02-05 01:08:50.982533 | orchestrator | Thursday 05 February 2026 01:06:29 +0000 (0:00:00.602) 0:00:06.438 *****
2026-02-05 01:08:50.982539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982568 | orchestrator |
2026-02-05 01:08:50.982575 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-02-05 01:08:50.982581 | orchestrator | Thursday 05 February 2026 01:06:31 +0000 (0:00:01.198) 0:00:07.637 *****
2026-02-05 01:08:50.982588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 01:08:50.982619 | orchestrator |
2026-02-05 01:08:50.982626 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-05 01:08:50.982633 | orchestrator | Thursday 05 February 2026 01:06:32 +0000 (0:00:01.234) 0:00:08.871 *****
2026-02-05 01:08:50.982661 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:08:50.982668 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:08:50.982675 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:08:50.982682 | orchestrator |
2026-02-05 01:08:50.982689 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-05 01:08:50.982696 | orchestrator | Thursday 05 February 2026 01:06:32 +0000 (0:00:00.365) 0:00:09.237 *****
2026-02-05 01:08:50.982723 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-05 01:08:50.982730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-05 01:08:50.982737 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-05 01:08:50.982745 | orchestrator |
2026-02-05 01:08:50.982752 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-05 01:08:50.982759 | orchestrator | Thursday 05 February 2026 01:06:33 +0000 (0:00:01.158) 0:00:10.396 *****
2026-02-05 01:08:50.982767 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-05 01:08:50.982778 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-05 01:08:50.982786 | orchestrator |
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-05 01:08:50.982792 | orchestrator | 2026-02-05 01:08:50.982799 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-05 01:08:50.982806 | orchestrator | Thursday 05 February 2026 01:06:35 +0000 (0:00:01.258) 0:00:11.655 ***** 2026-02-05 01:08:50.982813 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:08:50.982820 | orchestrator | 2026-02-05 01:08:50.982826 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-05 01:08:50.982833 | orchestrator | Thursday 05 February 2026 01:06:35 +0000 (0:00:00.728) 0:00:12.383 ***** 2026-02-05 01:08:50.982841 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-05 01:08:50.982848 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-05 01:08:50.982886 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:08:50.982893 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:08:50.982900 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:08:50.982907 | orchestrator | 2026-02-05 01:08:50.982914 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-05 01:08:50.982927 | orchestrator | Thursday 05 February 2026 01:06:36 +0000 (0:00:00.673) 0:00:13.056 ***** 2026-02-05 01:08:50.982934 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:08:50.982941 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:08:50.982948 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:08:50.982955 | orchestrator | 2026-02-05 01:08:50.982962 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-05 01:08:50.982969 | orchestrator | Thursday 05 February 2026 01:06:36 +0000 (0:00:00.469) 0:00:13.526 ***** 2026-02-05 01:08:50.982976 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083698, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3259647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.982989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083698, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3259647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.982999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083698, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3259647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083751, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3376544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083751, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3376544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1083751, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3376544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3286874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3286874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3286874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1083753, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3387609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1083753, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3387609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1083753, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3387609, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083730, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3316543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083730, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3316543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1083730, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3316543, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1083747, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3367043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1083747, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3367043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1083747, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1770250628.3367043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083697, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3246665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083697, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3246665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083697, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3246665, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083704, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.326899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083704, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.326899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083704, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770250628.326899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083717, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3291876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083717, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3291876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083717, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770250628.3291876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1083734, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3326542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1083734, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3326542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1083734, 'dev': 149, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3326542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1083750, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3375046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1083750, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3375046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1083750, 'dev': 149, 'nlink': 
1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3375046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1083709, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3280785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1083709, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3280785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 
1083709, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3280785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1083742, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3356543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1083742, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3356543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 19695, 'inode': 1083742, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3356543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1083731, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3326542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1083731, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3326542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1083731, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3326542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1083727, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3316543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1083727, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3316543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1083723, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.331199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1083727, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3316543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1083723, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.331199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1083737, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3344433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1083723, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.331199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1083737, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3344433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1083718, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3296542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1083737, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3344433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1083718, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3296542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983792 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1083749, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3367043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1083718, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3296542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1083877, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3663526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983822 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1083749, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3367043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1083780, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3480973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1083877, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3663526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-05 01:08:50.983844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1083749, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3367043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1083768, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3420258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1083780, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3480973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1083877, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3663526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083797, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3496718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1083768, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3420258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1083780, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3480973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083761, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3398702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083797, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770250628.3496718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083837, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3586547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1083768, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3420258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 
1083761, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3398702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083799, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1083797, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3496718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083842, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3601964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083837, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3586547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1083761, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3398702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083872, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3636546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.983998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083799, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1083837, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3586547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984015 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083834, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3576546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083842, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3601964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1083799, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083794, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3486545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083872, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3636546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1083842, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3601964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083778, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3446543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083834, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3576546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083791, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3482559, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1083872, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3636546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083770, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3436544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083794, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770250628.3486545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1083834, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3576546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083795, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3492913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
82960, 'inode': 1083778, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3446543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1083794, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3486545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083858, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3636546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083791, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3482559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1083778, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3446543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083849, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3616626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083770, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3436544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1083791, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3482559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083762, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3401656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083795, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3492913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1083770, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3436544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083764, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3407109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984309 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083829, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3576546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083858, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3636546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1083795, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3492913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083847, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.360423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083849, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3616626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1083858, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3636546, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083762, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3401656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1083849, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3616626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1083764, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770250628.3407109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1083762, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3401656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083829, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3576546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 53882, 'inode': 1083764, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3407109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083847, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.360423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1083829, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.3576546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1083847, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770250628.360423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:08:50.984449 | orchestrator | 2026-02-05 01:08:50.984457 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-05 01:08:50.984465 | orchestrator | Thursday 05 February 2026 01:07:14 +0000 (0:00:37.264) 0:00:50.790 ***** 2026-02-05 01:08:50.984473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:08:50.984488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:08:50.984495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:08:50.984502 | orchestrator | 2026-02-05 01:08:50.984508 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-05 01:08:50.984514 | orchestrator | Thursday 05 February 2026 01:07:15 +0000 (0:00:00.993) 0:00:51.784 ***** 2026-02-05 01:08:50.984520 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:08:50.984527 | orchestrator | 2026-02-05 01:08:50.984534 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-05 01:08:50.984571 | orchestrator | Thursday 05 February 2026 01:07:17 +0000 (0:00:01.986) 0:00:53.770 ***** 2026-02-05 01:08:50.984578 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:08:50.984585 | orchestrator | 2026-02-05 01:08:50.984591 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 01:08:50.984597 | orchestrator | Thursday 05 February 2026 01:07:19 +0000 (0:00:02.114) 0:00:55.884 ***** 2026-02-05 01:08:50.984604 | orchestrator | 2026-02-05 01:08:50.984611 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-02-05 01:08:50.984617 | orchestrator | Thursday 05 February 2026 01:07:19 +0000 (0:00:00.060) 0:00:55.945 ***** 2026-02-05 01:08:50.984624 | orchestrator | 2026-02-05 01:08:50.984631 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 01:08:50.984637 | orchestrator | Thursday 05 February 2026 01:07:19 +0000 (0:00:00.062) 0:00:56.007 ***** 2026-02-05 01:08:50.984644 | orchestrator | 2026-02-05 01:08:50.984651 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-05 01:08:50.984658 | orchestrator | Thursday 05 February 2026 01:07:19 +0000 (0:00:00.161) 0:00:56.169 ***** 2026-02-05 01:08:50.984664 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:08:50.984671 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:08:50.984678 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:08:50.984684 | orchestrator | 2026-02-05 01:08:50.984691 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-05 01:08:50.984697 | orchestrator | Thursday 05 February 2026 01:07:21 +0000 (0:00:01.807) 0:00:57.977 ***** 2026-02-05 01:08:50.984704 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:08:50.984711 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:08:50.984717 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-05 01:08:50.984724 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-05 01:08:50.984730 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-05 01:08:50.984742 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-05 01:08:50.984749 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:08:50.984756 | orchestrator | 2026-02-05 01:08:50.984763 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-05 01:08:50.984770 | orchestrator | Thursday 05 February 2026 01:08:11 +0000 (0:00:49.868) 0:01:47.846 ***** 2026-02-05 01:08:50.984777 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:08:50.984784 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:08:50.984791 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:08:50.984797 | orchestrator | 2026-02-05 01:08:50.984804 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-05 01:08:50.984810 | orchestrator | Thursday 05 February 2026 01:08:43 +0000 (0:00:32.103) 0:02:19.949 ***** 2026-02-05 01:08:50.984816 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:08:50.984822 | orchestrator | 2026-02-05 01:08:50.984829 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-05 01:08:50.984836 | orchestrator | Thursday 05 February 2026 01:08:45 +0000 (0:00:02.602) 0:02:22.551 ***** 2026-02-05 01:08:50.984842 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:08:50.984849 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:08:50.984871 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:08:50.984878 | orchestrator | 2026-02-05 01:08:50.984885 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-05 01:08:50.984892 | orchestrator | Thursday 05 February 2026 01:08:46 +0000 (0:00:00.382) 0:02:22.933 ***** 2026-02-05 01:08:50.984903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-05 01:08:50.984911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-05 01:08:50.984918 | orchestrator | 2026-02-05 01:08:50.984925 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-05 01:08:50.984932 | orchestrator | Thursday 05 February 2026 01:08:48 +0000 (0:00:02.377) 0:02:25.311 ***** 2026-02-05 01:08:50.984938 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:08:50.984945 | orchestrator | 2026-02-05 01:08:50.984952 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:08:50.984959 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:08:50.984967 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:08:50.984973 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:08:50.984980 | orchestrator | 2026-02-05 01:08:50.984987 | orchestrator | 2026-02-05 01:08:50.984993 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:08:50.985000 | orchestrator | Thursday 05 February 2026 01:08:49 +0000 (0:00:00.252) 0:02:25.563 ***** 2026-02-05 01:08:50.985010 | orchestrator | =============================================================================== 2026-02-05 01:08:50.985017 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 49.87s 2026-02-05 01:08:50.985024 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.26s 2026-02-05 01:08:50.985037 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.10s 2026-02-05 01:08:50.985043 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.60s 2026-02-05 01:08:50.985050 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.38s 2026-02-05 01:08:50.985056 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.11s 2026-02-05 01:08:50.985063 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.99s 2026-02-05 01:08:50.985069 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.81s 2026-02-05 01:08:50.985076 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.27s 2026-02-05 01:08:50.985083 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s 2026-02-05 01:08:50.985089 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.23s 2026-02-05 01:08:50.985096 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.20s 2026-02-05 01:08:50.985103 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.16s 2026-02-05 01:08:50.985109 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s 2026-02-05 01:08:50.985115 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.95s 2026-02-05 01:08:50.985122 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s 2026-02-05 01:08:50.985129 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.73s 2026-02-05 01:08:50.985136 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.67s
2026-02-05 01:08:50.985142 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.66s
2026-02-05 01:08:50.985149 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.60s
2026-02-05 01:08:50.985156 | orchestrator | 2026-02-05 01:08:50 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state STARTED
2026-02-05 01:08:50.985222 | orchestrator | 2026-02-05 01:08:50 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:08:50.985230 | orchestrator | 2026-02-05 01:08:50 | INFO  | Wait 1 second(s) until the next check
[... the same two task status lines and the wait message repeat roughly every 3 seconds from 01:08:54 through 01:09:27 ...]
2026-02-05 01:09:30.549293 | orchestrator | 2026-02-05 01:09:30 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:09:30.551267 | orchestrator | 2026-02-05 01:09:30 | INFO  | Task 217d967f-27ce-49aa-bcfb-052a1a172d1b is in state SUCCESS
2026-02-05 01:09:30.556335 | orchestrator | 2026-02-05 01:09:30 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:09:30.556374 | orchestrator | 2026-02-05 01:09:30 | INFO  | Wait 1 second(s) until the next check
[... polling of tasks 4abdbdb5-9eff-467b-a59a-a219efe62b55 and 02748b3f-e96d-48c0-889d-d5bb24d8fa10 repeats roughly every 3 seconds ...]
INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:13:30.963541 | orchestrator | 2026-02-05 01:13:30 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:34.012680 | orchestrator | 2026-02-05 01:13:34 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:34.014368 | orchestrator | 2026-02-05 01:13:34 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state STARTED
2026-02-05 01:13:34.014404 | orchestrator | 2026-02-05 01:13:34 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:37.054965 | orchestrator | 2026-02-05 01:13:37 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:37.058898 | orchestrator | 2026-02-05 01:13:37 | INFO  | Task 02748b3f-e96d-48c0-889d-d5bb24d8fa10 is in state SUCCESS
2026-02-05 01:13:37.061772 | orchestrator |
2026-02-05 01:13:37.061855 | orchestrator |
2026-02-05 01:13:37.061865 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:13:37.061872 | orchestrator |
2026-02-05 01:13:37.061878 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:13:37.061885 | orchestrator | Thursday 05 February 2026 01:05:34 +0000 (0:00:00.156) 0:00:00.156 *****
2026-02-05 01:13:37.061891 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.061899 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:13:37.061905 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:13:37.061911 | orchestrator |
2026-02-05 01:13:37.061919 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:13:37.061925 | orchestrator | Thursday 05 February 2026 01:05:35 +0000 (0:00:00.274) 0:00:00.430 *****
2026-02-05 01:13:37.061932 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-02-05 01:13:37.061939 | orchestrator | ok: [testbed-node-1] =>
(item=enable_nova_True)
2026-02-05 01:13:37.061945 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-02-05 01:13:37.061951 | orchestrator |
2026-02-05 01:13:37.061958 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-02-05 01:13:37.061964 | orchestrator |
2026-02-05 01:13:37.061971 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-02-05 01:13:37.061977 | orchestrator | Thursday 05 February 2026 01:05:35 +0000 (0:00:00.559) 0:00:00.990 *****
2026-02-05 01:13:37.062000 | orchestrator |
2026-02-05 01:13:37.062006 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-05 01:13:37.062050 | orchestrator |
2026-02-05 01:13:37.062059 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-05 01:13:37.062067 | orchestrator |
2026-02-05 01:13:37.062073 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-05 01:13:37.062081 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.062089 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:13:37.062097 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:13:37.062104 | orchestrator |
2026-02-05 01:13:37.062112 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:13:37.062121 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:13:37.062130 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:13:37.062137 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:13:37.062144 | orchestrator |
2026-02-05 01:13:37.062151 | orchestrator |
2026-02-05 01:13:37.062173 | orchestrator | TASKS RECAP
********************************************************************
2026-02-05 01:13:37.062202 | orchestrator | Thursday 05 February 2026 01:09:28 +0000 (0:03:52.855) 0:03:53.846 *****
2026-02-05 01:13:37.062209 | orchestrator | ===============================================================================
2026-02-05 01:13:37.062216 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 232.86s
2026-02-05 01:13:37.062223 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2026-02-05 01:13:37.062228 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-02-05 01:13:37.062235 | orchestrator |
2026-02-05 01:13:37.062242 | orchestrator |
2026-02-05 01:13:37.062249 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:13:37.062255 | orchestrator |
2026-02-05 01:13:37.062261 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-05 01:13:37.062267 | orchestrator | Thursday 05 February 2026 01:05:20 +0000 (0:00:00.251) 0:00:00.251 *****
2026-02-05 01:13:37.062273 | orchestrator | changed: [testbed-manager]
2026-02-05 01:13:37.062280 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062287 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:37.062294 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:37.062342 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.062413 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.062422 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.062429 | orchestrator |
2026-02-05 01:13:37.062435 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:13:37.062442 | orchestrator | Thursday 05 February 2026 01:05:21 +0000 (0:00:01.018) 0:00:01.270 *****
2026-02-05 01:13:37.062448 | orchestrator
| changed: [testbed-manager]
2026-02-05 01:13:37.062510 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062530 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:37.062537 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:37.062543 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.062549 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.062555 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.062562 | orchestrator |
2026-02-05 01:13:37.062570 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:13:37.062578 | orchestrator | Thursday 05 February 2026 01:05:22 +0000 (0:00:00.790) 0:00:02.061 *****
2026-02-05 01:13:37.062585 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-05 01:13:37.062593 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-05 01:13:37.062599 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-05 01:13:37.062606 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-05 01:13:37.062612 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-05 01:13:37.062619 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-05 01:13:37.062626 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-05 01:13:37.062633 | orchestrator |
2026-02-05 01:13:37.062640 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-05 01:13:37.062647 | orchestrator |
2026-02-05 01:13:37.062654 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-05 01:13:37.062660 | orchestrator | Thursday 05 February 2026 01:05:23 +0000 (0:00:00.894) 0:00:02.955 *****
2026-02-05 01:13:37.062687 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05
01:13:37.062696 | orchestrator |
2026-02-05 01:13:37.062702 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-05 01:13:37.062709 | orchestrator | Thursday 05 February 2026 01:05:23 +0000 (0:00:00.671) 0:00:03.626 *****
2026-02-05 01:13:37.062716 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-05 01:13:37.062724 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-05 01:13:37.062730 | orchestrator |
2026-02-05 01:13:37.062737 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-05 01:13:37.062753 | orchestrator | Thursday 05 February 2026 01:05:28 +0000 (0:00:04.748) 0:00:08.375 *****
2026-02-05 01:13:37.062760 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 01:13:37.062767 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 01:13:37.062774 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062780 | orchestrator |
2026-02-05 01:13:37.062786 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-05 01:13:37.062793 | orchestrator | Thursday 05 February 2026 01:05:32 +0000 (0:00:04.392) 0:00:12.767 *****
2026-02-05 01:13:37.062799 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062805 | orchestrator |
2026-02-05 01:13:37.062812 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-05 01:13:37.062818 | orchestrator | Thursday 05 February 2026 01:05:33 +0000 (0:00:00.660) 0:00:13.428 *****
2026-02-05 01:13:37.062825 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062832 | orchestrator |
2026-02-05 01:13:37.062839 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-05 01:13:37.062845 | orchestrator | Thursday 05 February 2026 01:05:34 +0000 (0:00:01.394) 0:00:14.822 *****
2026-02-05
01:13:37.062851 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062858 | orchestrator |
2026-02-05 01:13:37.062865 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-05 01:13:37.062873 | orchestrator | Thursday 05 February 2026 01:05:37 +0000 (0:00:02.253) 0:00:17.076 *****
2026-02-05 01:13:37.062880 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.062887 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.062894 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.062902 | orchestrator |
2026-02-05 01:13:37.062909 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-05 01:13:37.062916 | orchestrator | Thursday 05 February 2026 01:05:37 +0000 (0:00:00.266) 0:00:17.342 *****
2026-02-05 01:13:37.062924 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.062932 | orchestrator |
2026-02-05 01:13:37.062938 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-05 01:13:37.062954 | orchestrator | Thursday 05 February 2026 01:06:10 +0000 (0:00:32.643) 0:00:49.986 *****
2026-02-05 01:13:37.062960 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.062967 | orchestrator |
2026-02-05 01:13:37.062974 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-05 01:13:37.063027 | orchestrator | Thursday 05 February 2026 01:06:25 +0000 (0:00:15.548) 0:01:05.535 *****
2026-02-05 01:13:37.063035 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.063041 | orchestrator |
2026-02-05 01:13:37.063047 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-05 01:13:37.063053 | orchestrator | Thursday 05 February 2026 01:06:40 +0000 (0:00:15.293) 0:01:20.828 *****
2026-02-05 01:13:37.063060 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.063066 |
orchestrator |
2026-02-05 01:13:37.063072 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-05 01:13:37.063079 | orchestrator | Thursday 05 February 2026 01:06:41 +0000 (0:00:01.011) 0:01:21.840 *****
2026-02-05 01:13:37.063084 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.063090 | orchestrator |
2026-02-05 01:13:37.063098 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-05 01:13:37.063104 | orchestrator | Thursday 05 February 2026 01:06:42 +0000 (0:00:00.456) 0:01:22.296 *****
2026-02-05 01:13:37.063111 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:13:37.063117 | orchestrator |
2026-02-05 01:13:37.063122 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-05 01:13:37.063128 | orchestrator | Thursday 05 February 2026 01:06:42 +0000 (0:00:00.511) 0:01:22.808 *****
2026-02-05 01:13:37.063133 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.063147 | orchestrator |
2026-02-05 01:13:37.063153 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-05 01:13:37.063159 | orchestrator | Thursday 05 February 2026 01:06:59 +0000 (0:00:16.861) 0:01:39.670 *****
2026-02-05 01:13:37.063165 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.063170 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063176 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063185 | orchestrator |
2026-02-05 01:13:37.063190 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-05 01:13:37.063196 | orchestrator |
2026-02-05 01:13:37.063201 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-05 01:13:37.063207 | orchestrator |
Thursday 05 February 2026 01:07:00 +0000 (0:00:00.334) 0:01:40.004 *****
2026-02-05 01:13:37.063213 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:13:37.063219 | orchestrator |
2026-02-05 01:13:37.063224 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-05 01:13:37.063230 | orchestrator | Thursday 05 February 2026 01:07:00 +0000 (0:00:00.508) 0:01:40.513 *****
2026-02-05 01:13:37.063235 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063241 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063247 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.063253 | orchestrator |
2026-02-05 01:13:37.063259 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-05 01:13:37.063265 | orchestrator | Thursday 05 February 2026 01:07:02 +0000 (0:00:01.835) 0:01:42.349 *****
2026-02-05 01:13:37.063271 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063277 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063294 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.063300 | orchestrator |
2026-02-05 01:13:37.063305 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-05 01:13:37.063311 | orchestrator | Thursday 05 February 2026 01:07:04 +0000 (0:00:02.096) 0:01:44.445 *****
2026-02-05 01:13:37.063317 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.063390 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063397 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063403 | orchestrator |
2026-02-05 01:13:37.063409 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-05 01:13:37.063415 | orchestrator | Thursday 05 February 2026 01:07:04 +0000 (0:00:00.282) 0:01:44.727 *****
2026-02-05 01:13:37.063421 |
orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-05 01:13:37.063427 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063433 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-05 01:13:37.063449 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063456 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-05 01:13:37.063463 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-05 01:13:37.063470 | orchestrator |
2026-02-05 01:13:37.063500 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-05 01:13:37.063507 | orchestrator | Thursday 05 February 2026 01:07:11 +0000 (0:00:06.385) 0:01:51.113 *****
2026-02-05 01:13:37.063515 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.063521 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063529 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063535 | orchestrator |
2026-02-05 01:13:37.063542 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-05 01:13:37.063548 | orchestrator | Thursday 05 February 2026 01:07:11 +0000 (0:00:00.306) 0:01:51.419 *****
2026-02-05 01:13:37.063555 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-05 01:13:37.063576 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.063585 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-05 01:13:37.063591 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063597 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-05 01:13:37.063611 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063618 | orchestrator |
2026-02-05 01:13:37.063625 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-05 01:13:37.063630 | orchestrator | Thursday 05 February 2026 01:07:12 +0000 (0:00:00.573)
0:01:51.993 *****
2026-02-05 01:13:37.063636 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063642 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063648 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.063654 | orchestrator |
2026-02-05 01:13:37.063668 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-05 01:13:37.063674 | orchestrator | Thursday 05 February 2026 01:07:12 +0000 (0:00:00.404) 0:01:52.398 *****
2026-02-05 01:13:37.063681 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063687 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063694 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.063700 | orchestrator |
2026-02-05 01:13:37.063706 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-05 01:13:37.063712 | orchestrator | Thursday 05 February 2026 01:07:13 +0000 (0:00:00.789) 0:01:53.187 *****
2026-02-05 01:13:37.063720 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063726 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063733 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.063740 | orchestrator |
2026-02-05 01:13:37.063804 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-05 01:13:37.063811 | orchestrator | Thursday 05 February 2026 01:07:15 +0000 (0:00:01.872) 0:01:55.060 *****
2026-02-05 01:13:37.063817 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063823 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063828 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.063834 | orchestrator |
2026-02-05 01:13:37.063840 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-05 01:13:37.063846 | orchestrator | Thursday 05 February 2026 01:07:41 +0000 (0:00:26.564) 0:02:21.624
*****
2026-02-05 01:13:37.063852 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063859 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063865 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.063871 | orchestrator |
2026-02-05 01:13:37.063877 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-05 01:13:37.063883 | orchestrator | Thursday 05 February 2026 01:07:53 +0000 (0:00:11.372) 0:02:32.997 *****
2026-02-05 01:13:37.063889 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.063895 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063900 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063907 | orchestrator |
2026-02-05 01:13:37.063913 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-05 01:13:37.063919 | orchestrator | Thursday 05 February 2026 01:07:54 +0000 (0:00:01.114) 0:02:34.111 *****
2026-02-05 01:13:37.063925 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063931 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.063937 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.063944 | orchestrator |
2026-02-05 01:13:37.063950 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-05 01:13:37.063955 | orchestrator | Thursday 05 February 2026 01:08:06 +0000 (0:00:12.197) 0:02:46.308 *****
2026-02-05 01:13:37.063961 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.063968 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.063974 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.064001 | orchestrator |
2026-02-05 01:13:37.064008 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-05 01:13:37.064014 | orchestrator | Thursday 05 February 2026 01:08:07 +0000 (0:00:00.996) 0:02:47.305 *****
2026-02-05
01:13:37.064020 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.064025 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.064031 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.064045 | orchestrator |
2026-02-05 01:13:37.064051 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-05 01:13:37.064057 | orchestrator |
2026-02-05 01:13:37.064072 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-05 01:13:37.064078 | orchestrator | Thursday 05 February 2026 01:08:07 +0000 (0:00:00.436) 0:02:47.741 *****
2026-02-05 01:13:37.064083 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:13:37.064091 | orchestrator |
2026-02-05 01:13:37.064097 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-02-05 01:13:37.064102 | orchestrator | Thursday 05 February 2026 01:08:08 +0000 (0:00:00.495) 0:02:48.236 *****
2026-02-05 01:13:37.064108 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-05 01:13:37.064115 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-05 01:13:37.064121 | orchestrator |
2026-02-05 01:13:37.064127 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-02-05 01:13:37.064133 | orchestrator | Thursday 05 February 2026 01:08:12 +0000 (0:00:03.812) 0:02:52.049 *****
2026-02-05 01:13:37.064141 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-05 01:13:37.064149 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-05 01:13:37.064155 | orchestrator | changed: [testbed-node-0] => (item=nova ->
https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-05 01:13:37.064161 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-05 01:13:37.064167 | orchestrator |
2026-02-05 01:13:37.064172 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-05 01:13:37.064179 | orchestrator | Thursday 05 February 2026 01:08:19 +0000 (0:00:06.915) 0:02:58.964 *****
2026-02-05 01:13:37.064185 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 01:13:37.064190 | orchestrator |
2026-02-05 01:13:37.064196 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-05 01:13:37.064202 | orchestrator | Thursday 05 February 2026 01:08:22 +0000 (0:00:03.206) 0:03:02.170 *****
2026-02-05 01:13:37.064208 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:13:37.064214 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-05 01:13:37.064220 | orchestrator |
2026-02-05 01:13:37.064226 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-05 01:13:37.064240 | orchestrator | Thursday 05 February 2026 01:08:26 +0000 (0:00:04.069) 0:03:06.240 *****
2026-02-05 01:13:37.064247 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 01:13:37.064252 | orchestrator |
2026-02-05 01:13:37.064258 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-02-05 01:13:37.064264 | orchestrator | Thursday 05 February 2026 01:08:29 +0000 (0:00:03.030) 0:03:09.271 *****
2026-02-05 01:13:37.064270 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-05 01:13:37.064276 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-05 01:13:37.064283 | orchestrator |
2026-02-05 01:13:37.064289 | orchestrator |
TASK [nova : Ensuring config directories exist] ********************************
2026-02-05 01:13:37.064295 | orchestrator | Thursday 05 February 2026 01:08:37 +0000 (0:00:08.214) 0:03:17.486 *****
2026-02-05 01:13:37.064307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-05 01:13:37.064331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-05 01:13:37.064343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-05 01:13:37.064351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.064359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.064372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.064378 | orchestrator |
2026-02-05
01:13:37.064385 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-02-05 01:13:37.064392 | orchestrator | Thursday 05 February 2026 01:08:38 +0000 (0:00:01.245) 0:03:18.732 *****
2026-02-05 01:13:37.064398 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.064404 | orchestrator |
2026-02-05 01:13:37.064410 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-02-05 01:13:37.064419 | orchestrator | Thursday 05 February 2026 01:08:38 +0000 (0:00:00.119) 0:03:18.851 *****
2026-02-05 01:13:37.064425 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.064431 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.064437 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.064443 | orchestrator |
2026-02-05 01:13:37.064449 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-02-05 01:13:37.064455 | orchestrator | Thursday 05 February 2026 01:08:39 +0000 (0:00:00.250) 0:03:19.101 *****
2026-02-05 01:13:37.064461 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:13:37.064466 | orchestrator |
2026-02-05 01:13:37.064474 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-02-05 01:13:37.064479 | orchestrator | Thursday 05 February 2026 01:08:39 +0000 (0:00:00.623) 0:03:19.725 *****
2026-02-05 01:13:37.064485 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.064491 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.064497 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.064503 | orchestrator |
2026-02-05 01:13:37.064509 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-05 01:13:37.064514 | orchestrator | Thursday 05 February 2026 01:08:40 +0000 (0:00:00.388) 0:03:20.114 *****
2026-02-05 01:13:37.064520 | orchestrator |
included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:37.064526 | orchestrator | 2026-02-05 01:13:37.064532 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-05 01:13:37.064537 | orchestrator | Thursday 05 February 2026 01:08:40 +0000 (0:00:00.505) 0:03:20.619 ***** 2026-02-05 01:13:37.064548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.064560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.064572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.064578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.064594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.064607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.064612 | orchestrator | 2026-02-05 01:13:37.064618 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-05 01:13:37.064624 | orchestrator | Thursday 05 February 2026 01:08:43 +0000 (0:00:02.307) 0:03:22.927 ***** 2026-02-05 01:13:37.064630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.064643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.064650 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.064657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.064673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.064679 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.064686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065386 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.065392 | orchestrator | 2026-02-05 01:13:37.065397 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-05 01:13:37.065401 | orchestrator | Thursday 05 February 2026 01:08:43 +0000 (0:00:00.712) 0:03:23.639 ***** 2026-02-05 01:13:37.065407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065431 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.065435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065444 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.065455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065471 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.065475 | orchestrator | 2026-02-05 01:13:37.065479 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-05 01:13:37.065483 | orchestrator | Thursday 05 February 2026 01:08:44 +0000 (0:00:00.726) 0:03:24.365 ***** 2026-02-05 01:13:37.065487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065522 | orchestrator | 2026-02-05 01:13:37.065526 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-05 01:13:37.065530 | orchestrator | Thursday 05 February 2026 01:08:46 +0000 (0:00:02.326) 0:03:26.691 ***** 2026-02-05 01:13:37.065539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065594 | orchestrator | 2026-02-05 01:13:37.065598 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-05 01:13:37.065602 | orchestrator | Thursday 05 February 2026 01:08:51 +0000 (0:00:05.148) 0:03:31.840 ***** 2026-02-05 01:13:37.065673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065683 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.065691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065703 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.065712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:37.065717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.065721 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.065725 | orchestrator | 2026-02-05 01:13:37.065728 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-05 01:13:37.065732 | orchestrator | Thursday 05 February 2026 01:08:52 +0000 (0:00:00.511) 0:03:32.352 ***** 2026-02-05 01:13:37.065736 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:37.065740 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:37.065744 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:37.065747 | orchestrator | 2026-02-05 01:13:37.065751 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-05 01:13:37.065755 | orchestrator | Thursday 05 February 2026 01:08:53 +0000 (0:00:01.479) 0:03:33.831 ***** 2026-02-05 
01:13:37.065774 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.065779 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.065785 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.065792 | orchestrator | 2026-02-05 01:13:37.065796 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-05 01:13:37.065799 | orchestrator | Thursday 05 February 2026 01:08:54 +0000 (0:00:00.327) 0:03:34.158 ***** 2026-02-05 01:13:37.065808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:37.065828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.065856 | orchestrator | 2026-02-05 01:13:37.065862 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 01:13:37.065867 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:01.933) 0:03:36.092 ***** 2026-02-05 01:13:37.065873 | orchestrator | 2026-02-05 01:13:37.065878 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 01:13:37.065884 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:00.120) 0:03:36.213 ***** 2026-02-05 01:13:37.065890 | orchestrator | 2026-02-05 01:13:37.065895 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 01:13:37.065901 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:00.119) 0:03:36.332 ***** 2026-02-05 01:13:37.065907 | orchestrator | 2026-02-05 01:13:37.065912 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-05 01:13:37.065918 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:00.146) 0:03:36.478 ***** 2026-02-05 01:13:37.065923 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:37.065929 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:37.065935 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:37.065940 | orchestrator | 2026-02-05 01:13:37.065947 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-05 01:13:37.065953 | orchestrator | Thursday 05 February 2026 01:09:19 +0000 (0:00:23.200) 0:03:59.679 ***** 2026-02-05 
01:13:37.065959 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:37.065966 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:37.065972 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:37.065977 | orchestrator | 2026-02-05 01:13:37.066009 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-05 01:13:37.066057 | orchestrator | 2026-02-05 01:13:37.066064 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:37.066072 | orchestrator | Thursday 05 February 2026 01:09:30 +0000 (0:00:10.637) 0:04:10.316 ***** 2026-02-05 01:13:37.066079 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:37.066087 | orchestrator | 2026-02-05 01:13:37.066093 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:37.066099 | orchestrator | Thursday 05 February 2026 01:09:31 +0000 (0:00:01.145) 0:04:11.461 ***** 2026-02-05 01:13:37.066105 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.066112 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.066118 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.066125 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.066131 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.066136 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.066150 | orchestrator | 2026-02-05 01:13:37.066156 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-05 01:13:37.066163 | orchestrator | Thursday 05 February 2026 01:09:32 +0000 (0:00:00.582) 0:04:12.044 ***** 2026-02-05 01:13:37.066170 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.066176 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
01:13:37.066183 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.066190 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:13:37.066195 | orchestrator | 2026-02-05 01:13:37.066199 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 01:13:37.066202 | orchestrator | Thursday 05 February 2026 01:09:33 +0000 (0:00:00.989) 0:04:13.034 ***** 2026-02-05 01:13:37.066207 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-05 01:13:37.066211 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-05 01:13:37.066214 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-05 01:13:37.066218 | orchestrator | 2026-02-05 01:13:37.066222 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-05 01:13:37.066226 | orchestrator | Thursday 05 February 2026 01:09:33 +0000 (0:00:00.601) 0:04:13.636 ***** 2026-02-05 01:13:37.066229 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-05 01:13:37.066233 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-05 01:13:37.066237 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-05 01:13:37.066240 | orchestrator | 2026-02-05 01:13:37.066244 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 01:13:37.066248 | orchestrator | Thursday 05 February 2026 01:09:34 +0000 (0:00:01.110) 0:04:14.746 ***** 2026-02-05 01:13:37.066251 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-05 01:13:37.066255 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.066259 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-05 01:13:37.066263 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.066266 | orchestrator | skipping: [testbed-node-5] => 
(item=br_netfilter)  2026-02-05 01:13:37.066270 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.066275 | orchestrator | 2026-02-05 01:13:37.066288 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-05 01:13:37.066294 | orchestrator | Thursday 05 February 2026 01:09:35 +0000 (0:00:00.727) 0:04:15.474 ***** 2026-02-05 01:13:37.066300 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 01:13:37.066306 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 01:13:37.066312 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.066319 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 01:13:37.066325 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 01:13:37.066331 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.066337 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 01:13:37.066343 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 01:13:37.066350 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 01:13:37.066357 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.066361 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 01:13:37.066365 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 01:13:37.066369 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 01:13:37.066374 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 01:13:37.066380 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 
01:13:37.066392 | orchestrator | 2026-02-05 01:13:37.066398 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-05 01:13:37.066404 | orchestrator | Thursday 05 February 2026 01:09:36 +0000 (0:00:00.951) 0:04:16.426 ***** 2026-02-05 01:13:37.066410 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.066417 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.066422 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:37.066428 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.066434 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:37.066444 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:37.066450 | orchestrator | 2026-02-05 01:13:37.066456 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-05 01:13:37.066468 | orchestrator | Thursday 05 February 2026 01:09:37 +0000 (0:00:01.250) 0:04:17.677 ***** 2026-02-05 01:13:37.066472 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.066476 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.066479 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.066483 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:37.066487 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:37.066491 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:37.066494 | orchestrator | 2026-02-05 01:13:37.066498 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-05 01:13:37.066502 | orchestrator | Thursday 05 February 2026 01:09:39 +0000 (0:00:01.572) 0:04:19.249 ***** 2026-02-05 01:13:37.066507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066548 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066592 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066616 | orchestrator | 2026-02-05 01:13:37.066620 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:37.066624 | orchestrator | Thursday 05 February 2026 01:09:41 +0000 (0:00:02.373) 0:04:21.623 ***** 2026-02-05 01:13:37.066630 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:37.066638 | orchestrator | 2026-02-05 01:13:37.066643 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-05 01:13:37.066649 | orchestrator | Thursday 05 February 2026 01:09:42 +0000 (0:00:01.220) 0:04:22.843 ***** 2026-02-05 01:13:37.066656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.066752 | orchestrator | 2026-02-05 01:13:37.066756 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-05 01:13:37.066760 | orchestrator | Thursday 05 February 2026 01:09:46 +0000 (0:00:03.335) 0:04:26.178 ***** 2026-02-05 01:13:37.066785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.066793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.066797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.066801 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.066810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.066817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.066824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.066835 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.066847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.066851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.066858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.066862 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.066866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.066870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.066877 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.066882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.066889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:13:37.066899 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.066903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.066907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.066911 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.066915 | orchestrator | 2026-02-05 01:13:37.066924 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-05 01:13:37.066930 | orchestrator | Thursday 05 February 2026 01:09:48 +0000 (0:00:01.990) 0:04:28.169 ***** 2026-02-05 01:13:37.066936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.066948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.066959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.066967 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.066973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 
01:13:37.067002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.067012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.067016 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.067020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.067028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.067036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.067040 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
01:13:37.067044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.067051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.067055 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.067059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  
2026-02-05 01:13:37.067067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.067071 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.067074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.067083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 
01:13:37.067087 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.067091 | orchestrator | 2026-02-05 01:13:37.067095 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:37.067099 | orchestrator | Thursday 05 February 2026 01:09:50 +0000 (0:00:02.112) 0:04:30.282 ***** 2026-02-05 01:13:37.067103 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.067106 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.067110 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.067114 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:13:37.067118 | orchestrator | 2026-02-05 01:13:37.067122 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-05 01:13:37.067126 | orchestrator | Thursday 05 February 2026 01:09:51 +0000 (0:00:00.845) 0:04:31.128 ***** 2026-02-05 01:13:37.067130 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 01:13:37.067134 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 01:13:37.067138 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 01:13:37.067142 | orchestrator | 2026-02-05 01:13:37.067145 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-05 01:13:37.067149 | orchestrator | Thursday 05 February 2026 01:09:52 +0000 (0:00:01.094) 0:04:32.222 ***** 2026-02-05 01:13:37.067153 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 01:13:37.067157 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 01:13:37.067160 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 01:13:37.067164 | orchestrator | 2026-02-05 01:13:37.067168 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-05 01:13:37.067172 | orchestrator | Thursday 05 February 2026 
01:09:53 +0000 (0:00:00.950) 0:04:33.173 ***** 2026-02-05 01:13:37.067178 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:13:37.067192 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:13:37.067198 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:13:37.067204 | orchestrator | 2026-02-05 01:13:37.067210 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-05 01:13:37.067216 | orchestrator | Thursday 05 February 2026 01:09:53 +0000 (0:00:00.461) 0:04:33.635 ***** 2026-02-05 01:13:37.067227 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:13:37.067232 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:13:37.067238 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:13:37.067244 | orchestrator | 2026-02-05 01:13:37.067250 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-05 01:13:37.067256 | orchestrator | Thursday 05 February 2026 01:09:54 +0000 (0:00:00.769) 0:04:34.405 ***** 2026-02-05 01:13:37.067262 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-05 01:13:37.067268 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-05 01:13:37.067275 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-05 01:13:37.067281 | orchestrator | 2026-02-05 01:13:37.067288 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-05 01:13:37.067294 | orchestrator | Thursday 05 February 2026 01:09:55 +0000 (0:00:01.230) 0:04:35.636 ***** 2026-02-05 01:13:37.067300 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-05 01:13:37.067306 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-05 01:13:37.067311 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-05 01:13:37.067316 | orchestrator | 2026-02-05 01:13:37.067322 | orchestrator | TASK [nova-cell : Copy over ceph.conf] 
***************************************** 2026-02-05 01:13:37.067327 | orchestrator | Thursday 05 February 2026 01:09:56 +0000 (0:00:01.005) 0:04:36.641 ***** 2026-02-05 01:13:37.067333 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-05 01:13:37.067339 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-05 01:13:37.067345 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-05 01:13:37.067351 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-05 01:13:37.067357 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-05 01:13:37.067363 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-05 01:13:37.067370 | orchestrator | 2026-02-05 01:13:37.067376 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-05 01:13:37.067382 | orchestrator | Thursday 05 February 2026 01:10:00 +0000 (0:00:03.918) 0:04:40.559 ***** 2026-02-05 01:13:37.067388 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.067396 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.067402 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.067408 | orchestrator | 2026-02-05 01:13:37.067415 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-05 01:13:37.067421 | orchestrator | Thursday 05 February 2026 01:10:00 +0000 (0:00:00.306) 0:04:40.866 ***** 2026-02-05 01:13:37.067427 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.067434 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.067440 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.067446 | orchestrator | 2026-02-05 01:13:37.067451 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-05 01:13:37.067456 | orchestrator | Thursday 05 February 2026 01:10:01 +0000 (0:00:00.516) 0:04:41.382 
***** 2026-02-05 01:13:37.067461 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:37.067466 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:37.067472 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:37.067477 | orchestrator | 2026-02-05 01:13:37.067484 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-05 01:13:37.067490 | orchestrator | Thursday 05 February 2026 01:10:02 +0000 (0:00:01.243) 0:04:42.625 ***** 2026-02-05 01:13:37.067501 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-05 01:13:37.067514 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-05 01:13:37.067520 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-05 01:13:37.067526 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-05 01:13:37.067532 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-05 01:13:37.067538 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-05 01:13:37.067544 | orchestrator | 2026-02-05 01:13:37.067551 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-05 01:13:37.067557 | orchestrator | Thursday 05 February 2026 01:10:06 +0000 (0:00:03.283) 0:04:45.909 ***** 2026-02-05 01:13:37.067564 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 01:13:37.067570 | orchestrator | changed: 
[testbed-node-4] => (item=None) 2026-02-05 01:13:37.067577 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 01:13:37.067582 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 01:13:37.067588 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:37.067594 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 01:13:37.067600 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:37.067604 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 01:13:37.067608 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:37.067611 | orchestrator | 2026-02-05 01:13:37.067615 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-05 01:13:37.067619 | orchestrator | Thursday 05 February 2026 01:10:09 +0000 (0:00:03.423) 0:04:49.332 ***** 2026-02-05 01:13:37.067623 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.067627 | orchestrator | 2026-02-05 01:13:37.067631 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-05 01:13:37.067638 | orchestrator | Thursday 05 February 2026 01:10:09 +0000 (0:00:00.250) 0:04:49.583 ***** 2026-02-05 01:13:37.067642 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.067646 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.067650 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.067654 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.067657 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.067661 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.067665 | orchestrator | 2026-02-05 01:13:37.067669 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-05 01:13:37.067672 | orchestrator | Thursday 05 February 2026 01:10:10 +0000 (0:00:00.596) 0:04:50.180 ***** 2026-02-05 01:13:37.067676 | orchestrator | ok: 
[testbed-node-3 -> localhost] 2026-02-05 01:13:37.067680 | orchestrator | 2026-02-05 01:13:37.067683 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-05 01:13:37.067687 | orchestrator | Thursday 05 February 2026 01:10:10 +0000 (0:00:00.596) 0:04:50.776 ***** 2026-02-05 01:13:37.067691 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.067695 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.067698 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.067702 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.067706 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.067709 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.067713 | orchestrator | 2026-02-05 01:13:37.067719 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-05 01:13:37.067725 | orchestrator | Thursday 05 February 2026 01:10:11 +0000 (0:00:00.623) 0:04:51.400 ***** 2026-02-05 01:13:37.067732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067750 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067830 | orchestrator | 2026-02-05 01:13:37.067834 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-05 01:13:37.067838 | orchestrator | Thursday 05 February 2026 01:10:14 +0000 (0:00:03.278) 0:04:54.678 ***** 2026-02-05 01:13:37.067845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.067849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.067857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.067861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.067916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.067921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.067928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067950 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:37.067976 | orchestrator | 2026-02-05 01:13:37.068002 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-05 01:13:37.068007 | orchestrator | Thursday 05 February 2026 01:10:21 +0000 (0:00:06.817) 0:05:01.496 ***** 2026-02-05 01:13:37.068010 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.068015 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.068018 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.068022 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068026 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068030 | orchestrator | 
skipping: [testbed-node-2] 2026-02-05 01:13:37.068033 | orchestrator | 2026-02-05 01:13:37.068037 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-05 01:13:37.068043 | orchestrator | Thursday 05 February 2026 01:10:22 +0000 (0:00:01.310) 0:05:02.806 ***** 2026-02-05 01:13:37.068048 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-05 01:13:37.068055 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-05 01:13:37.068077 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-05 01:13:37.068089 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-05 01:13:37.068100 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-05 01:13:37.068107 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-05 01:13:37.068113 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-05 01:13:37.068120 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068126 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-05 01:13:37.068132 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068138 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-05 01:13:37.068144 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068150 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-05 01:13:37.068155 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-05 01:13:37.068160 | orchestrator | 
changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-05 01:13:37.068173 | orchestrator | 2026-02-05 01:13:37.068179 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-05 01:13:37.068186 | orchestrator | Thursday 05 February 2026 01:10:26 +0000 (0:00:03.849) 0:05:06.656 ***** 2026-02-05 01:13:37.068192 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.068198 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.068204 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.068210 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068216 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068222 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068228 | orchestrator | 2026-02-05 01:13:37.068234 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-05 01:13:37.068240 | orchestrator | Thursday 05 February 2026 01:10:27 +0000 (0:00:00.635) 0:05:07.291 ***** 2026-02-05 01:13:37.068254 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-05 01:13:37.068260 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-05 01:13:37.068264 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-05 01:13:37.068267 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-05 01:13:37.068271 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-05 01:13:37.068275 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-compute'}) 2026-02-05 01:13:37.068279 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:37.068282 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:37.068286 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:37.068290 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068294 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:37.068298 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:37.068302 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068308 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:37.068314 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068319 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:37.068324 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:37.068330 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:37.068335 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:37.068340 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:37.068345 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:37.068350 | orchestrator | 2026-02-05 01:13:37.068355 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-05 01:13:37.068360 | orchestrator | Thursday 05 February 2026 01:10:32 +0000 (0:00:05.523) 0:05:12.815 ***** 2026-02-05 01:13:37.068370 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 01:13:37.068387 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 01:13:37.068393 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 01:13:37.068399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-05 01:13:37.068405 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-05 01:13:37.068411 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-05 01:13:37.068416 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:13:37.068422 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:13:37.068428 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:13:37.068434 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 01:13:37.068439 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 01:13:37.068444 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 01:13:37.068449 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 
'ssh_config'})  2026-02-05 01:13:37.068455 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068460 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-05 01:13:37.068466 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068471 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-05 01:13:37.068477 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068482 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:13:37.068488 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:13:37.068494 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:13:37.068504 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:13:37.068511 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:13:37.068517 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:13:37.068523 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:13:37.068529 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:13:37.068535 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:13:37.068541 | orchestrator | 2026-02-05 01:13:37.068547 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-05 01:13:37.068553 | orchestrator | Thursday 05 February 2026 01:10:39 +0000 (0:00:06.585) 0:05:19.400 ***** 2026-02-05 01:13:37.068560 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.068566 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 01:13:37.068573 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.068579 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068587 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068594 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068601 | orchestrator | 2026-02-05 01:13:37.068607 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-05 01:13:37.068613 | orchestrator | Thursday 05 February 2026 01:10:40 +0000 (0:00:00.532) 0:05:19.932 ***** 2026-02-05 01:13:37.068618 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.068624 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.068636 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.068641 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068647 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068653 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068658 | orchestrator | 2026-02-05 01:13:37.068665 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-05 01:13:37.068671 | orchestrator | Thursday 05 February 2026 01:10:40 +0000 (0:00:00.730) 0:05:20.663 ***** 2026-02-05 01:13:37.068678 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068684 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068689 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068695 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:37.068701 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:37.068707 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:37.068714 | orchestrator | 2026-02-05 01:13:37.068720 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-05 01:13:37.068726 | orchestrator | Thursday 05 February 2026 01:10:42 +0000 (0:00:01.600) 
0:05:22.264 ***** 2026-02-05 01:13:37.068740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.068749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.068760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.068769 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.068774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.068784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.068788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.068795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.068800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:37.068804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:37.068811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.068820 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.068826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.068830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.068835 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.068846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:37.068850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.068858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:37.068869 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.068876 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.068882 | orchestrator | 2026-02-05 01:13:37.068888 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-05 01:13:37.068894 | orchestrator | Thursday 05 February 2026 01:10:43 +0000 (0:00:01.235) 0:05:23.499 ***** 2026-02-05 01:13:37.068901 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-05 01:13:37.068909 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-05 01:13:37.068916 | 
orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:37.068922 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-05 01:13:37.068929 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-05 01:13:37.068935 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:37.068943 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-05 01:13:37.068949 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-05 01:13:37.068953 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:37.068957 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-05 01:13:37.068961 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-05 01:13:37.068965 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:37.068969 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-05 01:13:37.068973 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-05 01:13:37.068978 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:37.069022 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-05 01:13:37.069028 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-05 01:13:37.069035 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:37.069042 | orchestrator | 2026-02-05 01:13:37.069048 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-05 01:13:37.069054 | orchestrator | Thursday 05 February 2026 01:10:44 +0000 (0:00:00.546) 0:05:24.045 ***** 2026-02-05 01:13:37.069068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:37.069148 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:37.069157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.069165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.069172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.069178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.069188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.069201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:37.069207 | orchestrator |
2026-02-05 01:13:37.069213 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-05 01:13:37.069218 | orchestrator | Thursday 05 February 2026 01:10:47 +0000 (0:00:02.929) 0:05:26.975 *****
2026-02-05 01:13:37.069231 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:37.069237 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:37.069243 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:37.069249 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.069255 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.069261 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.069266 | orchestrator |
2026-02-05 01:13:37.069272 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:37.069278 | orchestrator | Thursday 05 February 2026 01:10:47 +0000 (0:00:00.563) 0:05:27.538 *****
2026-02-05 01:13:37.069284 | orchestrator |
2026-02-05 01:13:37.069289 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:37.069295 | orchestrator | Thursday 05 February 2026 01:10:47 +0000 (0:00:00.306) 0:05:27.845 *****
2026-02-05 01:13:37.069301 | orchestrator |
2026-02-05 01:13:37.069307 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:37.069313 | orchestrator | Thursday 05 February 2026 01:10:48 +0000 (0:00:00.133) 0:05:27.978 *****
2026-02-05 01:13:37.069319 | orchestrator |
2026-02-05 01:13:37.069325 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:37.069331 | orchestrator | Thursday 05 February 2026 01:10:48 +0000 (0:00:00.126) 0:05:28.104 *****
2026-02-05 01:13:37.069337 | orchestrator |
2026-02-05 01:13:37.069342 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:37.069347 | orchestrator | Thursday 05 February 2026 01:10:48 +0000 (0:00:00.126) 0:05:28.230 *****
2026-02-05 01:13:37.069352 | orchestrator |
2026-02-05 01:13:37.069357 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:37.069363 | orchestrator | Thursday 05 February 2026 01:10:48 +0000 (0:00:00.122) 0:05:28.353 *****
2026-02-05 01:13:37.069369 | orchestrator |
2026-02-05 01:13:37.069375 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-05 01:13:37.069381 | orchestrator | Thursday 05 February 2026 01:10:48 +0000 (0:00:00.129) 0:05:28.482 *****
2026-02-05 01:13:37.069387 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.069393 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:37.069399 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:37.069404 | orchestrator |
2026-02-05 01:13:37.069410 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-05 01:13:37.069415 | orchestrator | Thursday 05 February 2026 01:11:00 +0000 (0:00:12.216) 0:05:40.699 *****
2026-02-05 01:13:37.069421 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.069428 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:37.069434 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:37.069440 | orchestrator |
2026-02-05 01:13:37.069445 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-05 01:13:37.069452 | orchestrator | Thursday 05 February 2026 01:11:15 +0000 (0:00:14.536) 0:05:55.235 *****
2026-02-05 01:13:37.069463 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.069469 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.069476 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.069483 | orchestrator |
2026-02-05 01:13:37.069489 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-05 01:13:37.069494 | orchestrator | Thursday 05 February 2026 01:11:35 +0000 (0:00:19.827) 0:06:15.063 *****
2026-02-05 01:13:37.069500 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.069506 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.069511 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.069518 | orchestrator |
2026-02-05 01:13:37.069525 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-05 01:13:37.069609 | orchestrator | Thursday 05 February 2026 01:12:05 +0000 (0:00:30.379) 0:06:45.442 *****
2026-02-05 01:13:37.069622 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.069630 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.069635 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.069639 | orchestrator |
2026-02-05 01:13:37.069643 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-05 01:13:37.069647 | orchestrator | Thursday 05 February 2026 01:12:06 +0000 (0:00:00.799) 0:06:46.241 *****
2026-02-05 01:13:37.069651 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.069655 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.069659 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.069663 | orchestrator |
2026-02-05 01:13:37.069667 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-05 01:13:37.069673 | orchestrator | Thursday 05 February 2026 01:12:07 +0000 (0:00:00.670) 0:06:46.912 *****
2026-02-05 01:13:37.069679 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:37.069685 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:37.069690 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:37.069696 | orchestrator |
2026-02-05 01:13:37.069702 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-05 01:13:37.069708 | orchestrator | Thursday 05 February 2026 01:12:25 +0000 (0:00:18.792) 0:07:05.705 *****
2026-02-05 01:13:37.069716 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:37.069722 | orchestrator |
2026-02-05 01:13:37.069727 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-05 01:13:37.069733 | orchestrator | Thursday 05 February 2026 01:12:25 +0000 (0:00:00.114) 0:07:05.820 *****
2026-02-05 01:13:37.069738 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:37.069744 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.069750 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.069756 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:37.069761 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.069767 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-05 01:13:37.069774 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:37.069781 | orchestrator |
2026-02-05 01:13:37.069787 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-05 01:13:37.069793 | orchestrator | Thursday 05 February 2026 01:12:48 +0000 (0:00:22.921) 0:07:28.741 *****
2026-02-05 01:13:37.069800 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.069813 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:37.069819 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.069825 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:37.069831 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:37.069837 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.069843 | orchestrator |
2026-02-05 01:13:37.069850 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-05 01:13:37.069856 | orchestrator | Thursday 05 February 2026 01:12:57 +0000 (0:00:08.580) 0:07:37.321 *****
2026-02-05 01:13:37.069870 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.069877 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.069884 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:37.069888 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:37.069892 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.069896 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-05 01:13:37.069900 | orchestrator |
2026-02-05 01:13:37.069904 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-05 01:13:37.069908 | orchestrator | Thursday 05 February 2026 01:13:01 +0000 (0:00:03.705) 0:07:41.027 *****
2026-02-05 01:13:37.069912 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:37.069916 | orchestrator |
2026-02-05 01:13:37.069920 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-05 01:13:37.069923 | orchestrator | Thursday 05 February 2026 01:13:15 +0000 (0:00:14.256) 0:07:55.283 *****
2026-02-05 01:13:37.069927 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:37.069931 | orchestrator |
2026-02-05 01:13:37.069935 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-05 01:13:37.069939 | orchestrator | Thursday 05 February 2026 01:13:16 +0000 (0:00:01.174) 0:07:56.457 *****
2026-02-05 01:13:37.069943 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:37.069947 | orchestrator |
2026-02-05 01:13:37.069951 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-05 01:13:37.069955 | orchestrator | Thursday 05 February 2026 01:13:17 +0000 (0:00:01.200) 0:07:57.658 *****
2026-02-05 01:13:37.069958 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:37.069962 | orchestrator |
2026-02-05 01:13:37.069966 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-05 01:13:37.069970 | orchestrator | Thursday 05 February 2026 01:13:29 +0000 (0:00:11.859) 0:08:09.518 *****
2026-02-05 01:13:37.069973 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:13:37.069977 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:13:37.070045 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:13:37.070050 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:37.070094 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:13:37.070100 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:13:37.070103 | orchestrator |
2026-02-05 01:13:37.070107 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-05 01:13:37.070111 | orchestrator |
2026-02-05 01:13:37.070115 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-05 01:13:37.070119 | orchestrator | Thursday 05 February 2026 01:13:31 +0000 (0:00:01.805) 0:08:11.323 *****
2026-02-05 01:13:37.070123 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:37.070127 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:37.070130 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:37.070134 | orchestrator |
2026-02-05 01:13:37.070138 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-05 01:13:37.070142 | orchestrator |
2026-02-05 01:13:37.070146 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-05 01:13:37.070158 | orchestrator | Thursday 05 February 2026 01:13:32 +0000 (0:00:00.996) 0:08:12.320 *****
2026-02-05 01:13:37.070162 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.070168 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.070174 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.070182 | orchestrator |
2026-02-05 01:13:37.070188 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-05 01:13:37.070195 | orchestrator |
2026-02-05 01:13:37.070202 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-05 01:13:37.070208 | orchestrator | Thursday 05 February 2026 01:13:33 +0000 (0:00:00.777) 0:08:13.097 *****
2026-02-05 01:13:37.070214 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-05 01:13:37.070228 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-05 01:13:37.070234 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-05 01:13:37.070240 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-05 01:13:37.070247 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-05 01:13:37.070254 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:37.070261 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:37.070267 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-05 01:13:37.070274 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-05 01:13:37.070280 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-05 01:13:37.070287 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-05 01:13:37.070293 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-05 01:13:37.070299 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:37.070305 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:37.070312 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-05 01:13:37.070318 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-05 01:13:37.070324 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-05 01:13:37.070330 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-05 01:13:37.070337 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-05 01:13:37.070343 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:37.070355 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:37.070361 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-05 01:13:37.070366 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-05 01:13:37.070373 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-05 01:13:37.070378 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-05 01:13:37.070381 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-05 01:13:37.070385 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:37.070389 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.070393 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-05 01:13:37.070396 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-05 01:13:37.070400 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-05 01:13:37.070404 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-05 01:13:37.070408 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-05 01:13:37.070411 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:37.070415 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.070419 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-05 01:13:37.070423 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-05 01:13:37.070426 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-05 01:13:37.070430 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-05 01:13:37.070434 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-05 01:13:37.070438 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:37.070442 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.070446 | orchestrator |
2026-02-05 01:13:37.070450 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-05 01:13:37.070453 | orchestrator |
2026-02-05 01:13:37.070457 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-05 01:13:37.070461 | orchestrator | Thursday 05 February 2026 01:13:34 +0000 (0:00:01.400) 0:08:14.498 *****
2026-02-05 01:13:37.070469 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-05 01:13:37.070473 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-05 01:13:37.070477 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.070481 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-05 01:13:37.070485 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-05 01:13:37.070488 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.070492 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-05 01:13:37.070496 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-05 01:13:37.070499 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.070503 | orchestrator |
2026-02-05 01:13:37.070507 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-05 01:13:37.070511 | orchestrator |
2026-02-05 01:13:37.070514 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-05 01:13:37.070518 | orchestrator | Thursday 05 February 2026 01:13:35 +0000 (0:00:00.540) 0:08:15.038 *****
2026-02-05 01:13:37.070522 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.070526 | orchestrator |
2026-02-05 01:13:37.070530 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-05 01:13:37.070534 | orchestrator |
2026-02-05 01:13:37.070543 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-05 01:13:37.070547 | orchestrator | Thursday 05 February 2026 01:13:36 +0000 (0:00:01.015) 0:08:16.054 *****
2026-02-05 01:13:37.070551 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:37.070555 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:37.070558 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:37.070562 | orchestrator |
2026-02-05 01:13:37.070566 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:13:37.070570 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:13:37.070575 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-05 01:13:37.070579 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-05 01:13:37.070584 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-05 01:13:37.070588 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-05 01:13:37.070594 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-05 01:13:37.070600 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-05 01:13:37.070605 | orchestrator |
2026-02-05 01:13:37.070611 | orchestrator |
2026-02-05 01:13:37.070617 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:13:37.070623 | orchestrator | Thursday 05 February 2026 01:13:36 +0000 (0:00:00.423) 0:08:16.478 *****
2026-02-05 01:13:37.070632 | orchestrator | ===============================================================================
2026-02-05 01:13:37.070638 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.64s
2026-02-05 01:13:37.070643 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.38s
2026-02-05 01:13:37.070649 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 26.56s
2026-02-05 01:13:37.070710 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.20s
2026-02-05 01:13:37.070724 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.92s
2026-02-05 01:13:37.070730 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.83s
2026-02-05 01:13:37.070736 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 18.79s
2026-02-05 01:13:37.070742 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.86s
2026-02-05 01:13:37.070748 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.55s
2026-02-05 01:13:37.070753 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.29s
2026-02-05 01:13:37.070759 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.54s
2026-02-05 01:13:37.070765 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.26s
2026-02-05 01:13:37.070771 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.22s
2026-02-05 01:13:37.070777 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.20s
2026-02-05 01:13:37.070782 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.86s
2026-02-05 01:13:37.070788 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.37s
2026-02-05 01:13:37.070794 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.64s
2026-02-05 01:13:37.070800 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.58s
2026-02-05 01:13:37.070806 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.21s
2026-02-05 01:13:37.070813 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 6.92s
2026-02-05 01:13:40.091523 | orchestrator | 2026-02-05 01:13:40 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:40.091575 | orchestrator | 2026-02-05 01:13:40 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:43.138859 | orchestrator | 2026-02-05 01:13:43 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:43.138914 | orchestrator | 2026-02-05 01:13:43 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:46.181374 | orchestrator | 2026-02-05 01:13:46 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:46.181437 | orchestrator | 2026-02-05 01:13:46 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:49.236604 | orchestrator | 2026-02-05 01:13:49 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:49.237248 | orchestrator | 2026-02-05 01:13:49 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:52.287601 | orchestrator | 2026-02-05 01:13:52 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:52.287651 | orchestrator | 2026-02-05 01:13:52 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:55.328567 | orchestrator | 2026-02-05 01:13:55 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:55.328614 | orchestrator | 2026-02-05 01:13:55 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:58.373068 | orchestrator | 2026-02-05 01:13:58 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:13:58.373142 | orchestrator | 2026-02-05 01:13:58 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:01.417718 | orchestrator | 2026-02-05 01:14:01 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:01.417777 | orchestrator | 2026-02-05 01:14:01 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:04.461444 | orchestrator | 2026-02-05 01:14:04 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:04.461545 | orchestrator | 2026-02-05 01:14:04 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:07.505311 | orchestrator | 2026-02-05 01:14:07 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:07.505370 | orchestrator | 2026-02-05 01:14:07 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:10.556881 | orchestrator | 2026-02-05 01:14:10 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:10.556987 | orchestrator | 2026-02-05 01:14:10 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:13.600603 | orchestrator | 2026-02-05 01:14:13 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:13.600690 | orchestrator | 2026-02-05 01:14:13 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:16.643162 | orchestrator | 2026-02-05 01:14:16 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:16.643252 | orchestrator | 2026-02-05 01:14:16 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:19.693185 | orchestrator | 2026-02-05 01:14:19 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:19.694190 | orchestrator | 2026-02-05 01:14:19 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:22.731562 | orchestrator | 2026-02-05 01:14:22 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state STARTED
2026-02-05 01:14:22.731640 | orchestrator | 2026-02-05 01:14:22 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:25.769641 | orchestrator | 2026-02-05 01:14:25 | INFO  | Task 4abdbdb5-9eff-467b-a59a-a219efe62b55 is in state SUCCESS
2026-02-05 01:14:25.771585 | orchestrator |
2026-02-05 01:14:25.771633 | orchestrator |
2026-02-05 01:14:25.771640 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:14:25.771645 | orchestrator |
2026-02-05 01:14:25.771649 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:14:25.771654 | orchestrator | Thursday 05 February 2026 01:09:33 +0000 (0:00:00.267) 0:00:00.267 *****
2026-02-05 01:14:25.771658 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:14:25.771663 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:14:25.771667 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:14:25.771671 | orchestrator |
2026-02-05 01:14:25.771675 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:14:25.771679 | orchestrator | Thursday 05 February 2026 01:09:33 +0000 (0:00:00.302) 0:00:00.570 *****
2026-02-05 01:14:25.771682 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-05 01:14:25.771687 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-05 01:14:25.771691 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-05 01:14:25.771694 | orchestrator |
2026-02-05 01:14:25.771698 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-05 01:14:25.771702 | orchestrator |
2026-02-05 01:14:25.771706 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 01:14:25.771709 | orchestrator | Thursday 05 February 2026 01:09:34 +0000 (0:00:00.422) 0:00:00.992 *****
2026-02-05 01:14:25.771713 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:14:25.771718 | orchestrator |
2026-02-05 01:14:25.771721 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-05 01:14:25.771725 | orchestrator | Thursday 05 February 2026 01:09:34 +0000 (0:00:00.585) 0:00:01.577 *****
2026-02-05 01:14:25.771729 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-05 01:14:25.771733 | orchestrator |
2026-02-05 01:14:25.771737 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-05 01:14:25.771755 | orchestrator | Thursday 05 February 2026 01:09:37 +0000 (0:00:03.140) 0:00:04.718 *****
2026-02-05 01:14:25.771759 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-05 01:14:25.771763 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-05 01:14:25.771767 | orchestrator |
2026-02-05 01:14:25.771770 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-05 01:14:25.771774 | orchestrator | Thursday 05 February 2026 01:09:43 +0000 (0:00:05.888) 0:00:10.607 *****
2026-02-05 01:14:25.771778 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 01:14:25.771782 | orchestrator |
2026-02-05 01:14:25.771786 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-05 01:14:25.771790 | orchestrator | Thursday 05 February 2026 01:09:47 +0000 (0:00:03.461) 0:00:14.068 *****
2026-02-05 01:14:25.771794 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:14:25.771798 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-05 01:14:25.771802 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-05 01:14:25.771806 | orchestrator |
2026-02-05 01:14:25.771809 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-05 01:14:25.771813 | orchestrator | Thursday 05 February 2026 01:09:55 +0000 (0:00:08.382) 0:00:22.451 *****
2026-02-05 01:14:25.771817 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 01:14:25.771821 | orchestrator |
2026-02-05 01:14:25.771825 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-05 01:14:25.771829 | orchestrator | Thursday 05 February 2026 01:09:58 +0000 (0:00:03.308) 0:00:25.759 *****
2026-02-05 01:14:25.771832 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-05 01:14:25.771836 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-05 01:14:25.771840 | orchestrator |
2026-02-05 01:14:25.771844 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-05 01:14:25.771848 | orchestrator | Thursday 05 February 2026 01:10:06 +0000 (0:00:07.976) 0:00:33.736 *****
2026-02-05 01:14:25.771852 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-05 01:14:25.771855 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-05 01:14:25.771859 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-05 01:14:25.771863 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-05 01:14:25.771872 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-05 01:14:25.771876 | orchestrator |
2026-02-05 01:14:25.771880 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 01:14:25.771884 | orchestrator | Thursday 05 February 2026 01:10:23 +0000 (0:00:17.013) 0:00:50.750 *****
2026-02-05 01:14:25.771887 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:14:25.771891 | orchestrator |
2026-02-05 01:14:25.771895 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-05 01:14:25.771899 | orchestrator | Thursday 05 February 2026 01:10:24 +0000 (0:00:00.809) 0:00:51.559 *****
2026-02-05 01:14:25.771903 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.771907 | orchestrator |
2026-02-05 01:14:25.771911 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-05 01:14:25.771914 | orchestrator | Thursday 05 February 2026 01:10:29 +0000 (0:00:04.819) 0:00:56.379 *****
2026-02-05 01:14:25.771918 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.771922 | orchestrator |
2026-02-05 01:14:25.771944 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-05 01:14:25.771959 | orchestrator | Thursday 05 February 2026 01:10:35 +0000 (0:00:05.727) 0:01:02.106 *****
2026-02-05 01:14:25.771963 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:14:25.772018 | orchestrator |
2026-02-05 01:14:25.772022 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-05 01:14:25.772026 | orchestrator | Thursday 05 February 2026 01:10:38 +0000 (0:00:03.535) 0:01:05.642 *****
2026-02-05 01:14:25.772030 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-05 01:14:25.772034 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-05 01:14:25.772037 | orchestrator |
2026-02-05 01:14:25.772041 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-05 01:14:25.772045 | orchestrator | Thursday 05 February 2026 01:10:49 +0000 (0:00:11.227) 0:01:16.870 *****
2026-02-05 01:14:25.772049 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-05 01:14:25.772053 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-05 01:14:25.772058 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-05 01:14:25.772063 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-05 01:14:25.772066 | orchestrator |
2026-02-05 01:14:25.772070 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-05 01:14:25.772074 | orchestrator | Thursday 05 February 2026 01:11:05 +0000 (0:00:04.524) 0:01:32.248 *****
2026-02-05 01:14:25.772078 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.772082 | orchestrator |
2026-02-05 01:14:25.772085 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-05 01:14:25.772089 | orchestrator | Thursday 05 February 2026 01:11:09 +0000 (0:00:05.204) 0:01:36.772 *****
2026-02-05 01:14:25.772093 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.772096 | orchestrator |
2026-02-05 01:14:25.772100 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-05 01:14:25.772104 | orchestrator | Thursday 05 February 2026 01:11:15 +0000 (0:00:00.212) 0:01:41.977 *****
2026-02-05 01:14:25.772108 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:14:25.772111 | orchestrator |
2026-02-05 01:14:25.772115 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-05 01:14:25.772119 | orchestrator | Thursday 05 February 2026 01:11:15 +0000 (0:00:00.212) 0:01:42.189 *****
2026-02-05 01:14:25.772123 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:14:25.772126 | orchestrator |
2026-02-05 01:14:25.772130 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 01:14:25.772134 | orchestrator | Thursday 05 February 2026
01:11:19 +0000 (0:00:04.361) 0:01:46.551 ***** 2026-02-05 01:14:25.772138 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:14:25.772141 | orchestrator | 2026-02-05 01:14:25.772145 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-05 01:14:25.772149 | orchestrator | Thursday 05 February 2026 01:11:20 +0000 (0:00:01.180) 0:01:47.732 ***** 2026-02-05 01:14:25.772152 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.772156 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772161 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772165 | orchestrator | 2026-02-05 01:14:25.772170 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-05 01:14:25.772174 | orchestrator | Thursday 05 February 2026 01:11:26 +0000 (0:00:05.251) 0:01:52.983 ***** 2026-02-05 01:14:25.772178 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.772183 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772188 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772192 | orchestrator | 2026-02-05 01:14:25.772196 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-05 01:14:25.772201 | orchestrator | Thursday 05 February 2026 01:11:30 +0000 (0:00:04.359) 0:01:57.342 ***** 2026-02-05 01:14:25.772209 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.772214 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772218 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772223 | orchestrator | 2026-02-05 01:14:25.772227 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-05 01:14:25.772231 | orchestrator | Thursday 05 February 2026 01:11:31 +0000 (0:00:00.810) 0:01:58.153 ***** 2026-02-05 01:14:25.772236 
| orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772244 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:25.772249 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:25.772253 | orchestrator | 2026-02-05 01:14:25.772258 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-05 01:14:25.772262 | orchestrator | Thursday 05 February 2026 01:11:32 +0000 (0:00:01.805) 0:01:59.959 ***** 2026-02-05 01:14:25.772268 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.772274 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772280 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772286 | orchestrator | 2026-02-05 01:14:25.772292 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-05 01:14:25.772298 | orchestrator | Thursday 05 February 2026 01:11:34 +0000 (0:00:01.155) 0:02:01.114 ***** 2026-02-05 01:14:25.772303 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.772309 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772315 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772321 | orchestrator | 2026-02-05 01:14:25.772327 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-05 01:14:25.772333 | orchestrator | Thursday 05 February 2026 01:11:35 +0000 (0:00:01.167) 0:02:02.282 ***** 2026-02-05 01:14:25.772339 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.772345 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772351 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772357 | orchestrator | 2026-02-05 01:14:25.772368 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-05 01:14:25.772375 | orchestrator | Thursday 05 February 2026 01:11:37 +0000 (0:00:02.087) 0:02:04.370 ***** 2026-02-05 01:14:25.772381 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 01:14:25.772388 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.772393 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.772397 | orchestrator | 2026-02-05 01:14:25.772402 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-05 01:14:25.772406 | orchestrator | Thursday 05 February 2026 01:11:39 +0000 (0:00:01.650) 0:02:06.021 ***** 2026-02-05 01:14:25.772411 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772415 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:25.772420 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:25.772424 | orchestrator | 2026-02-05 01:14:25.772428 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-05 01:14:25.772433 | orchestrator | Thursday 05 February 2026 01:11:39 +0000 (0:00:00.883) 0:02:06.905 ***** 2026-02-05 01:14:25.772437 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:25.772442 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:25.772446 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772451 | orchestrator | 2026-02-05 01:14:25.772455 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 01:14:25.772459 | orchestrator | Thursday 05 February 2026 01:11:43 +0000 (0:00:03.285) 0:02:10.191 ***** 2026-02-05 01:14:25.772464 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:14:25.772468 | orchestrator | 2026-02-05 01:14:25.772473 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-05 01:14:25.772478 | orchestrator | Thursday 05 February 2026 01:11:43 +0000 (0:00:00.532) 0:02:10.723 ***** 2026-02-05 01:14:25.772482 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772490 | orchestrator | 2026-02-05 01:14:25.772495 | orchestrator 
| TASK [octavia : Get service project id] **************************************** 2026-02-05 01:14:25.772499 | orchestrator | Thursday 05 February 2026 01:11:47 +0000 (0:00:04.054) 0:02:14.778 ***** 2026-02-05 01:14:25.772504 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772509 | orchestrator | 2026-02-05 01:14:25.772513 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-05 01:14:25.772518 | orchestrator | Thursday 05 February 2026 01:11:50 +0000 (0:00:02.976) 0:02:17.754 ***** 2026-02-05 01:14:25.772525 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-05 01:14:25.772531 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-05 01:14:25.772537 | orchestrator | 2026-02-05 01:14:25.772543 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-05 01:14:25.772549 | orchestrator | Thursday 05 February 2026 01:11:58 +0000 (0:00:07.715) 0:02:25.470 ***** 2026-02-05 01:14:25.772555 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772561 | orchestrator | 2026-02-05 01:14:25.772568 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-05 01:14:25.772573 | orchestrator | Thursday 05 February 2026 01:12:01 +0000 (0:00:03.226) 0:02:28.696 ***** 2026-02-05 01:14:25.772578 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:25.772583 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:25.772589 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:25.772595 | orchestrator | 2026-02-05 01:14:25.772601 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-05 01:14:25.772606 | orchestrator | Thursday 05 February 2026 01:12:02 +0000 (0:00:00.308) 0:02:29.005 ***** 2026-02-05 01:14:25.772619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.772634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.772640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.772652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.772661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.772667 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.772677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.772756 | orchestrator | 2026-02-05 01:14:25.772760 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-05 01:14:25.772768 | orchestrator | Thursday 05 February 2026 01:12:04 +0000 (0:00:02.347) 0:02:31.353 ***** 2026-02-05 01:14:25.772772 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:25.772776 | orchestrator | 2026-02-05 01:14:25.772780 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-05 01:14:25.772784 | orchestrator | Thursday 05 February 2026 01:12:04 +0000 (0:00:00.121) 0:02:31.474 ***** 2026-02-05 01:14:25.772787 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:25.772791 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:25.772795 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:25.772799 | orchestrator | 2026-02-05 01:14:25.772803 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-05 01:14:25.772807 | orchestrator | Thursday 05 February 2026 01:12:04 +0000 (0:00:00.375) 0:02:31.849 ***** 2026-02-05 01:14:25.772811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.772815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.772819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.772826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.772830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.772837 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:25.772845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.772849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.772853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.772857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.772864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.772868 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:25.772876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.772884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.772888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.772892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.772896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.772900 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:25.772904 | orchestrator | 2026-02-05 01:14:25.772908 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 01:14:25.772912 | orchestrator | Thursday 05 February 2026 01:12:05 +0000 (0:00:00.603) 0:02:32.453 ***** 2026-02-05 01:14:25.772916 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:14:25.772920 | orchestrator | 2026-02-05 01:14:25.772924 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-05 01:14:25.772972 | orchestrator | Thursday 05 February 2026 01:12:06 +0000 (0:00:00.528) 0:02:32.981 ***** 2026-02-05 01:14:25.772980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:25.772994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port':
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773079 | orchestrator | 2026-02-05 01:14:25.773083 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-05 01:14:25.773087 | orchestrator | Thursday 05 
February 2026 01:12:11 +0000 (0:00:05.210) 0:02:38.192 ***** 2026-02-05 01:14:25.773091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.773095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.773099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.773120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.773124 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:25.773128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.773132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.773151 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:25.773159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.773164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.773168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.773184 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:25.773188 | orchestrator | 2026-02-05 01:14:25.773191 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-05 01:14:25.773195 | orchestrator | Thursday 05 February 2026 01:12:12 +0000 (0:00:01.237) 0:02:39.429 ***** 2026-02-05 01:14:25.773205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.773209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.773213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}})  2026-02-05 01:14:25.773223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.773233 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:25.773246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.773256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.773268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.773295 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:25.773302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:25.773307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:25.773316 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:25.773507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:25.773512 | orchestrator | skipping: [testbed-node-2] 
2026-02-05 01:14:25.773516 | orchestrator | 2026-02-05 01:14:25.773520 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-05 01:14:25.773524 | orchestrator | Thursday 05 February 2026 01:12:13 +0000 (0:00:00.996) 0:02:40.425 ***** 2026-02-05 01:14:25.773528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773618 | orchestrator | 2026-02-05 01:14:25.773622 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-05 01:14:25.773626 | orchestrator | Thursday 05 February 2026 01:12:18 +0000 (0:00:05.229) 0:02:45.655 ***** 2026-02-05 01:14:25.773632 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-05 01:14:25.773637 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-05 01:14:25.773641 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-05 01:14:25.773645 | orchestrator | 2026-02-05 01:14:25.773649 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-05 01:14:25.773653 | orchestrator | Thursday 05 February 2026 01:12:20 +0000 (0:00:01.911) 0:02:47.566 ***** 2026-02-05 01:14:25.773660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.773677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:25.773694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:25.773743 | orchestrator | 2026-02-05 01:14:25.773747 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-05 01:14:25.773751 | orchestrator | Thursday 05 February 2026 01:12:37 +0000 (0:00:17.097) 0:03:04.664 ***** 2026-02-05 01:14:25.773755 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:25.773759 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:25.773763 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:25.773767 | orchestrator | 2026-02-05 01:14:25.773771 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-05 01:14:25.773774 | orchestrator | Thursday 05 February 2026 01:12:39 +0000 (0:00:01.427) 0:03:06.091 ***** 2026-02-05 01:14:25.773778 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773782 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773786 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773790 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773794 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773798 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773802 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773806 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773809 | orchestrator | changed: 
[testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773813 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773817 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773821 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773825 | orchestrator | 2026-02-05 01:14:25.773828 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-05 01:14:25.773832 | orchestrator | Thursday 05 February 2026 01:12:44 +0000 (0:00:05.265) 0:03:11.357 ***** 2026-02-05 01:14:25.773836 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773840 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773844 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773848 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773851 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773855 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773859 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773865 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773869 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773873 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773880 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773884 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773888 | orchestrator | 2026-02-05 01:14:25.773892 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 
2026-02-05 01:14:25.773896 | orchestrator | Thursday 05 February 2026 01:12:50 +0000 (0:00:05.967) 0:03:17.324 ***** 2026-02-05 01:14:25.773900 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773904 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773907 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 01:14:25.773911 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773915 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773919 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 01:14:25.773923 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773948 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773956 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 01:14:25.773961 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773967 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773973 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 01:14:25.773979 | orchestrator | 2026-02-05 01:14:25.773985 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-05 01:14:25.773991 | orchestrator | Thursday 05 February 2026 01:12:57 +0000 (0:00:06.853) 0:03:24.178 ***** 2026-02-05 01:14:25.773997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.774003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:25.774077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 01:14:25.774097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 01:14:25.774110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 01:14:25.774118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 01:14:25.774123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 01:14:25.774127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 01:14:25.774131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 01:14:25.774143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 01:14:25.774147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 01:14:25.774154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 01:14:25.774158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:14:25.774162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:14:25.774166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:14:25.774170 | orchestrator |
2026-02-05 01:14:25.774178 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 01:14:25.774181 | orchestrator | Thursday 05 February 2026 01:13:00 +0000 (0:00:03.778) 0:03:27.956 *****
2026-02-05 01:14:25.774185 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:14:25.774189 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:14:25.774193 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:14:25.774197 | orchestrator |
2026-02-05 01:14:25.774200 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-02-05 01:14:25.774204 | orchestrator | Thursday 05 February 2026 01:13:01 +0000 (0:00:00.221) 0:03:28.178 *****
2026-02-05 01:14:25.774208 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774212 | orchestrator |
2026-02-05 01:14:25.774216 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-02-05 01:14:25.774219 | orchestrator | Thursday 05 February 2026 01:13:03 +0000 (0:00:02.457) 0:03:30.636 *****
2026-02-05 01:14:25.774223 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774227 | orchestrator |
2026-02-05 01:14:25.774231 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-02-05 01:14:25.774234 | orchestrator | Thursday 05 February 2026 01:13:05 +0000 (0:00:02.280) 0:03:32.917 *****
2026-02-05 01:14:25.774241 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774245 | orchestrator |
2026-02-05 01:14:25.774250 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-02-05 01:14:25.774254 | orchestrator | Thursday 05 February 2026 01:13:08 +0000 (0:00:02.443) 0:03:35.360 *****
2026-02-05 01:14:25.774257 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774261 | orchestrator |
2026-02-05 01:14:25.774265 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-02-05 01:14:25.774269 | orchestrator | Thursday 05 February 2026 01:13:11 +0000 (0:00:02.798) 0:03:38.158 *****
2026-02-05 01:14:25.774273 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774276 | orchestrator |
2026-02-05 01:14:25.774280 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-05 01:14:25.774284 | orchestrator | Thursday 05 February 2026 01:13:30 +0000 (0:00:19.477) 0:03:57.636 *****
2026-02-05 01:14:25.774288 | orchestrator |
2026-02-05 01:14:25.774292 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-05 01:14:25.774296 | orchestrator | Thursday 05 February 2026 01:13:30 +0000 (0:00:00.066) 0:03:57.702 *****
2026-02-05 01:14:25.774299 | orchestrator |
2026-02-05 01:14:25.774303 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-02-05 01:14:25.774307 | orchestrator | Thursday 05 February 2026 01:13:30 +0000 (0:00:00.066) 0:03:57.769 *****
2026-02-05 01:14:25.774311 | orchestrator |
2026-02-05 01:14:25.774314 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-02-05 01:14:25.774320 | orchestrator | Thursday 05 February 2026 01:13:30 +0000 (0:00:00.067) 0:03:57.836 *****
2026-02-05 01:14:25.774324 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774328 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:14:25.774332 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:14:25.774336 | orchestrator |
2026-02-05 01:14:25.774339 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-02-05 01:14:25.774343 | orchestrator | Thursday 05 February 2026 01:13:48 +0000 (0:00:17.347) 0:04:15.184 *****
2026-02-05 01:14:25.774347 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:14:25.774351 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:14:25.774355 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774359 | orchestrator |
2026-02-05 01:14:25.774362 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-02-05 01:14:25.774366 | orchestrator | Thursday 05 February 2026 01:13:56 +0000 (0:00:08.321) 0:04:23.505 *****
2026-02-05 01:14:25.774370 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:14:25.774374 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774377 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:14:25.774384 | orchestrator |
2026-02-05 01:14:25.774388 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-02-05 01:14:25.774392 | orchestrator | Thursday 05 February 2026 01:14:07 +0000 (0:00:10.826) 0:04:34.332 *****
2026-02-05 01:14:25.774396 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774399 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:14:25.774403 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:14:25.774407 | orchestrator |
2026-02-05 01:14:25.774411 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-02-05 01:14:25.774415 | orchestrator | Thursday 05 February 2026 01:14:17 +0000 (0:00:10.487) 0:04:44.819 *****
2026-02-05 01:14:25.774418 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:25.774422 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:14:25.774426 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:14:25.774430 | orchestrator |
2026-02-05 01:14:25.774434 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:14:25.774439 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 01:14:25.774444 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 01:14:25.774448 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 01:14:25.774452 | orchestrator |
2026-02-05 01:14:25.774455 | orchestrator |
2026-02-05 01:14:25.774459 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:14:25.774463 | orchestrator | Thursday 05 February 2026 01:14:23 +0000 (0:00:05.554) 0:04:50.374 *****
2026-02-05 01:14:25.774467 | orchestrator | ===============================================================================
2026-02-05 01:14:25.774471 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.48s
2026-02-05 01:14:25.774475 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.35s
2026-02-05 01:14:25.774479 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.10s
2026-02-05 01:14:25.774482 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.01s
2026-02-05 01:14:25.774486 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.38s
2026-02-05 01:14:25.774490 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.23s
2026-02-05 01:14:25.774494 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.83s
2026-02-05 01:14:25.774497 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.49s
2026-02-05 01:14:25.774501 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.38s
2026-02-05 01:14:25.774505 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.32s
2026-02-05 01:14:25.774509 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.98s
2026-02-05 01:14:25.774513 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.72s
2026-02-05 01:14:25.774519 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.85s
2026-02-05 01:14:25.774523 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.97s
2026-02-05 01:14:25.774527 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.89s
2026-02-05 01:14:25.774530 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.73s
2026-02-05 01:14:25.774534 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.56s
2026-02-05 01:14:25.774538 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.27s
2026-02-05 01:14:25.774542 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.25s
2026-02-05 01:14:25.774546 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.23s
2026-02-05 01:14:28.810491 | orchestrator | 2026-02-05 01:14:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:31.853811 | orchestrator | 2026-02-05 01:14:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:34.897871 | orchestrator | 2026-02-05 01:14:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:37.942542 | orchestrator | 2026-02-05 01:14:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:40.978555 | orchestrator | 2026-02-05 01:14:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:44.022979 | orchestrator | 2026-02-05 01:14:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:47.063631 | orchestrator | 2026-02-05 01:14:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:50.099627 | orchestrator | 2026-02-05 01:14:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:53.140015 | orchestrator | 2026-02-05 01:14:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:56.178433 | orchestrator | 2026-02-05 01:14:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:14:59.220671 | orchestrator | 2026-02-05 01:14:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:02.266090 | orchestrator | 2026-02-05 01:15:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:05.305531 | orchestrator | 2026-02-05 01:15:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:08.346121 | orchestrator | 2026-02-05 01:15:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:11.383353 | orchestrator | 2026-02-05 01:15:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:14.425351 | orchestrator | 2026-02-05 01:15:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:17.468049 | orchestrator | 2026-02-05 01:15:17 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:20.505820 | orchestrator | 2026-02-05 01:15:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:23.543586 | orchestrator | 2026-02-05 01:15:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-05 01:15:26.584587 | orchestrator |
2026-02-05 01:15:26.848755 | orchestrator |
2026-02-05 01:15:26.856592 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Feb 5 01:15:26 UTC 2026
2026-02-05 01:15:26.856894 | orchestrator |
2026-02-05 01:15:27.185648 | orchestrator | ok: Runtime: 0:35:41.336038
2026-02-05 01:15:27.446758 |
2026-02-05 01:15:27.446942 | TASK [Bootstrap services]
2026-02-05 01:15:28.235261 | orchestrator |
2026-02-05 01:15:28.235406 | orchestrator | # BOOTSTRAP
2026-02-05 01:15:28.235419 | orchestrator |
2026-02-05 01:15:28.235426 | orchestrator | + set -e
2026-02-05 01:15:28.235433 | orchestrator | + echo
2026-02-05 01:15:28.235442 | orchestrator | + echo '# BOOTSTRAP'
2026-02-05 01:15:28.235453 | orchestrator | + echo
2026-02-05 01:15:28.235478 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-02-05 01:15:28.243955 | orchestrator | + set -e
2026-02-05 01:15:28.244049 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-02-05 01:15:32.180465 | orchestrator | 2026-02-05 01:15:32 | INFO  | It takes a moment until task 11095783-6060-42c8-b395-18c91a28ab46 (flavor-manager) has been started and output is visible here.
2026-02-05 01:15:40.538126 | orchestrator | 2026-02-05 01:15:35 | INFO  | Flavor SCS-1L-1 created
2026-02-05 01:15:40.538257 | orchestrator | 2026-02-05 01:15:35 | INFO  | Flavor SCS-1L-1-5 created
2026-02-05 01:15:40.538266 | orchestrator | 2026-02-05 01:15:35 | INFO  | Flavor SCS-1V-2 created
2026-02-05 01:15:40.538272 | orchestrator | 2026-02-05 01:15:36 | INFO  | Flavor SCS-1V-2-5 created
2026-02-05 01:15:40.538276 | orchestrator | 2026-02-05 01:15:36 | INFO  | Flavor SCS-1V-4 created
2026-02-05 01:15:40.538281 | orchestrator | 2026-02-05 01:15:36 | INFO  | Flavor SCS-1V-4-10 created
2026-02-05 01:15:40.538286 | orchestrator | 2026-02-05 01:15:36 | INFO  | Flavor SCS-1V-8 created
2026-02-05 01:15:40.538290 | orchestrator | 2026-02-05 01:15:36 | INFO  | Flavor SCS-1V-8-20 created
2026-02-05 01:15:40.538307 | orchestrator | 2026-02-05 01:15:36 | INFO  | Flavor SCS-2V-4 created
2026-02-05 01:15:40.538311 | orchestrator | 2026-02-05 01:15:37 | INFO  | Flavor SCS-2V-4-10 created
2026-02-05 01:15:40.538315 | orchestrator | 2026-02-05 01:15:37 | INFO  | Flavor SCS-2V-8 created
2026-02-05 01:15:40.538319 | orchestrator | 2026-02-05 01:15:37 | INFO  | Flavor SCS-2V-8-20 created
2026-02-05 01:15:40.538323 | orchestrator | 2026-02-05 01:15:37 | INFO  | Flavor SCS-2V-16 created
2026-02-05 01:15:40.538327 | orchestrator | 2026-02-05 01:15:37 | INFO  | Flavor SCS-2V-16-50 created
2026-02-05 01:15:40.538331 | orchestrator | 2026-02-05 01:15:38 | INFO  | Flavor SCS-4V-8 created
2026-02-05 01:15:40.538335 | orchestrator | 2026-02-05 01:15:38 | INFO  | Flavor SCS-4V-8-20 created
2026-02-05 01:15:40.538339 | orchestrator | 2026-02-05 01:15:38 | INFO  | Flavor SCS-4V-16 created
2026-02-05 01:15:40.538342 | orchestrator | 2026-02-05 01:15:38 | INFO  | Flavor SCS-4V-16-50 created
2026-02-05 01:15:40.538346 | orchestrator | 2026-02-05 01:15:38 | INFO  | Flavor SCS-4V-32 created
2026-02-05 01:15:40.538350 | orchestrator | 2026-02-05 01:15:38 | INFO  | Flavor SCS-4V-32-100 created
2026-02-05 01:15:40.538354 | orchestrator | 2026-02-05 01:15:39 | INFO  | Flavor SCS-8V-16 created
2026-02-05 01:15:40.538358 | orchestrator | 2026-02-05 01:15:39 | INFO  | Flavor SCS-8V-16-50 created
2026-02-05 01:15:40.538362 | orchestrator | 2026-02-05 01:15:39 | INFO  | Flavor SCS-8V-32 created
2026-02-05 01:15:40.538366 | orchestrator | 2026-02-05 01:15:39 | INFO  | Flavor SCS-8V-32-100 created
2026-02-05 01:15:40.538370 | orchestrator | 2026-02-05 01:15:39 | INFO  | Flavor SCS-16V-32 created
2026-02-05 01:15:40.538374 | orchestrator | 2026-02-05 01:15:39 | INFO  | Flavor SCS-16V-32-100 created
2026-02-05 01:15:40.538377 | orchestrator | 2026-02-05 01:15:40 | INFO  | Flavor SCS-2V-4-20s created
2026-02-05 01:15:40.538381 | orchestrator | 2026-02-05 01:15:40 | INFO  | Flavor SCS-4V-8-50s created
2026-02-05 01:15:40.538385 | orchestrator | 2026-02-05 01:15:40 | INFO  | Flavor SCS-8V-32-100s created
2026-02-05 01:15:42.787050 | orchestrator | 2026-02-05 01:15:42 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-02-05 01:15:52.904757 | orchestrator | 2026-02-05 01:15:52 | INFO  | Task 4fff2612-086e-4eed-ba1d-ff64b16d6574 (bootstrap-basic) was prepared for execution.
2026-02-05 01:15:52.904875 | orchestrator | 2026-02-05 01:15:52 | INFO  | It takes a moment until task 4fff2612-086e-4eed-ba1d-ff64b16d6574 (bootstrap-basic) has been started and output is visible here.
2026-02-05 01:16:37.777685 | orchestrator |
2026-02-05 01:16:37.777839 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-05 01:16:37.777856 | orchestrator |
2026-02-05 01:16:37.777864 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 01:16:37.777872 | orchestrator | Thursday 05 February 2026 01:15:56 +0000 (0:00:00.060) 0:00:00.060 *****
2026-02-05 01:16:37.777880 | orchestrator | ok: [localhost]
2026-02-05 01:16:37.777889 | orchestrator |
2026-02-05 01:16:37.777897 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-05 01:16:37.777905 | orchestrator | Thursday 05 February 2026 01:15:58 +0000 (0:00:01.647) 0:00:01.708 *****
2026-02-05 01:16:37.777913 | orchestrator | ok: [localhost]
2026-02-05 01:16:37.777921 | orchestrator |
2026-02-05 01:16:37.777929 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-05 01:16:37.777937 | orchestrator | Thursday 05 February 2026 01:16:07 +0000 (0:00:09.267) 0:00:10.975 *****
2026-02-05 01:16:37.777944 | orchestrator | changed: [localhost]
2026-02-05 01:16:37.777951 | orchestrator |
2026-02-05 01:16:37.777958 | orchestrator | TASK [Create public network] ***************************************************
2026-02-05 01:16:37.777965 | orchestrator | Thursday 05 February 2026 01:16:14 +0000 (0:00:05.352) 0:00:18.288 *****
2026-02-05 01:16:37.777973 | orchestrator | changed: [localhost]
2026-02-05 01:16:37.777980 | orchestrator |
2026-02-05 01:16:37.777988 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-05 01:16:37.777996 | orchestrator | Thursday 05 February 2026 01:16:20 +0000 (0:00:05.966) 0:00:23.641 *****
2026-02-05 01:16:37.778007 | orchestrator | changed: [localhost]
2026-02-05 01:16:37.778075 | orchestrator |
2026-02-05 01:16:37.778082 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-05 01:16:37.778087 | orchestrator | Thursday 05 February 2026 01:16:26 +0000 (0:00:04.062) 0:00:29.607 *****
2026-02-05 01:16:37.778092 | orchestrator | changed: [localhost]
2026-02-05 01:16:37.778096 | orchestrator |
2026-02-05 01:16:37.778100 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-05 01:16:37.778105 | orchestrator | Thursday 05 February 2026 01:16:30 +0000 (0:00:03.728) 0:00:33.670 *****
2026-02-05 01:16:37.778110 | orchestrator | changed: [localhost]
2026-02-05 01:16:37.778114 | orchestrator |
2026-02-05 01:16:37.778119 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-05 01:16:37.778137 | orchestrator | Thursday 05 February 2026 01:16:34 +0000 (0:00:03.478) 0:00:37.399 *****
2026-02-05 01:16:37.778143 | orchestrator | ok: [localhost]
2026-02-05 01:16:37.778150 | orchestrator |
2026-02-05 01:16:37.778157 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:16:37.778165 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:16:37.778174 | orchestrator |
2026-02-05 01:16:37.778181 | orchestrator |
2026-02-05 01:16:37.778188 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:16:37.778194 | orchestrator | Thursday 05 February 2026 01:16:37 +0000 (0:00:03.478) 0:00:40.878 *****
2026-02-05 01:16:37.778201 | orchestrator | ===============================================================================
2026-02-05 01:16:37.778208 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.27s
2026-02-05 01:16:37.778214 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.31s
2026-02-05 01:16:37.778221 | orchestrator | Set public network to default ------------------------------------------- 5.97s
2026-02-05 01:16:37.778228 | orchestrator | Create public network --------------------------------------------------- 5.35s
2026-02-05 01:16:37.778258 | orchestrator | Create public subnet ---------------------------------------------------- 4.06s
2026-02-05 01:16:37.778266 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.73s
2026-02-05 01:16:37.778274 | orchestrator | Create manager role ----------------------------------------------------- 3.48s
2026-02-05 01:16:37.778281 | orchestrator | Gathering Facts --------------------------------------------------------- 1.65s
2026-02-05 01:16:40.143522 | orchestrator | 2026-02-05 01:16:40 | INFO  | It takes a moment until task ae333cfe-d70b-4d85-a5b2-d7614619e6f7 (image-manager) has been started and output is visible here.
2026-02-05 01:17:21.778553 | orchestrator | 2026-02-05 01:16:42 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-05 01:17:21.778665 | orchestrator | 2026-02-05 01:16:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-05 01:17:21.778677 | orchestrator | 2026-02-05 01:16:42 | INFO  | Importing image Cirros 0.6.2
2026-02-05 01:17:21.778684 | orchestrator | 2026-02-05 01:16:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-05 01:17:21.778692 | orchestrator | 2026-02-05 01:16:45 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:17:21.778699 | orchestrator | 2026-02-05 01:16:47 | INFO  | Waiting for import to complete...
2026-02-05 01:17:21.778705 | orchestrator | 2026-02-05 01:16:57 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-05 01:17:21.778712 | orchestrator | 2026-02-05 01:16:57 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-05 01:17:21.778718 | orchestrator | 2026-02-05 01:16:57 | INFO  | Setting internal_version = 0.6.2
2026-02-05 01:17:21.778725 | orchestrator | 2026-02-05 01:16:57 | INFO  | Setting image_original_user = cirros
2026-02-05 01:17:21.778758 | orchestrator | 2026-02-05 01:16:57 | INFO  | Adding tag os:cirros
2026-02-05 01:17:21.778765 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting property architecture: x86_64
2026-02-05 01:17:21.778772 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting property hw_disk_bus: scsi
2026-02-05 01:17:21.778778 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting property hw_rng_model: virtio
2026-02-05 01:17:21.778784 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-05 01:17:21.778791 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property hw_watchdog_action: reset
2026-02-05 01:17:21.778798 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property hypervisor_type: qemu
2026-02-05 01:17:21.778804 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property os_distro: cirros
2026-02-05 01:17:21.778811 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property os_purpose: minimal
2026-02-05 01:17:21.778817 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property replace_frequency: never
2026-02-05 01:17:21.778824 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property uuid_validity: none
2026-02-05 01:17:21.778831 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property provided_until: none
2026-02-05 01:17:21.778838 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property image_description: Cirros
2026-02-05 01:17:21.778844 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property image_name: Cirros
2026-02-05 01:17:21.778850 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property internal_version: 0.6.2
2026-02-05 01:17:21.778856 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property image_original_user: cirros
2026-02-05 01:17:21.778885 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property os_version: 0.6.2
2026-02-05 01:17:21.778900 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-05 01:17:21.778909 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-05 01:17:21.778915 | orchestrator | 2026-02-05 01:17:02 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-05 01:17:21.778921 | orchestrator | 2026-02-05 01:17:02 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-05 01:17:21.778927 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-05 01:17:21.778934 | orchestrator | 2026-02-05 01:17:03 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-05 01:17:21.778943 | orchestrator | 2026-02-05 01:17:03 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-05 01:17:21.778950 | orchestrator | 2026-02-05 01:17:03 | INFO  | Importing image Cirros 0.6.3
2026-02-05 01:17:21.778956 | orchestrator | 2026-02-05 01:17:03 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-05 01:17:21.778962 | orchestrator | 2026-02-05 01:17:04 | INFO  | Waiting for import to complete...
2026-02-05 01:17:21.778967 | orchestrator | 2026-02-05 01:17:15 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-05 01:17:21.778989 | orchestrator | 2026-02-05 01:17:15 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-05 01:17:21.778995 | orchestrator | 2026-02-05 01:17:15 | INFO  | Setting internal_version = 0.6.3
2026-02-05 01:17:21.779000 | orchestrator | 2026-02-05 01:17:15 | INFO  | Setting image_original_user = cirros
2026-02-05 01:17:21.779005 | orchestrator | 2026-02-05 01:17:15 | INFO  | Adding tag os:cirros
2026-02-05 01:17:21.779011 | orchestrator | 2026-02-05 01:17:15 | INFO  | Setting property architecture: x86_64
2026-02-05 01:17:21.779017 | orchestrator | 2026-02-05 01:17:16 | INFO  | Setting property hw_disk_bus: scsi
2026-02-05 01:17:21.779023 | orchestrator | 2026-02-05 01:17:16 | INFO  | Setting property hw_rng_model: virtio
2026-02-05 01:17:21.779028 | orchestrator | 2026-02-05 01:17:16 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-05 01:17:21.779034 | orchestrator | 2026-02-05 01:17:17 | INFO  | Setting property hw_watchdog_action: reset
2026-02-05 01:17:21.779040 | orchestrator | 2026-02-05 01:17:17 | INFO  | Setting property hypervisor_type: qemu
2026-02-05 01:17:21.779046 | orchestrator | 2026-02-05 01:17:17 | INFO  | Setting property os_distro: cirros
2026-02-05 01:17:21.779053 | orchestrator | 2026-02-05 01:17:17 | INFO  | Setting property os_purpose: minimal
2026-02-05 01:17:21.779059 | orchestrator | 2026-02-05 01:17:17 | INFO  | Setting property replace_frequency: never
2026-02-05 01:17:21.779065 | orchestrator | 2026-02-05 01:17:18 | INFO  | Setting property uuid_validity: none
2026-02-05 01:17:21.779072 | orchestrator | 2026-02-05 01:17:18 | INFO  | Setting property provided_until: none
2026-02-05 01:17:21.779078 | orchestrator | 2026-02-05 01:17:18 | INFO  | Setting property image_description: Cirros
2026-02-05 01:17:21.779084 | orchestrator | 2026-02-05 01:17:18 | INFO  | Setting property image_name: Cirros
2026-02-05 01:17:21.779090 | orchestrator | 2026-02-05 01:17:19 | INFO  | Setting property internal_version: 0.6.3
2026-02-05 01:17:21.779096 | orchestrator | 2026-02-05 01:17:19 | INFO  | Setting property image_original_user: cirros
2026-02-05 01:17:21.779109 | orchestrator | 2026-02-05 01:17:19 | INFO  | Setting property os_version: 0.6.3
2026-02-05 01:17:21.779116 | orchestrator | 2026-02-05 01:17:20 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-05 01:17:21.779122 | orchestrator | 2026-02-05 01:17:20 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-05 01:17:21.779128 | orchestrator | 2026-02-05 01:17:20 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-05 01:17:21.779135 | orchestrator | 2026-02-05 01:17:20 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-05 01:17:21.779141 | orchestrator | 2026-02-05 01:17:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-05 01:17:22.072221 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-05 01:17:24.249414 | orchestrator | 2026-02-05 01:17:24 | INFO  | date: 2026-02-04
2026-02-05 01:17:24.249481 | orchestrator | 2026-02-05 01:17:24 | INFO  | image: octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-05 01:17:24.249689 | orchestrator | 2026-02-05 01:17:24 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-05 01:17:24.250074 | orchestrator | 2026-02-05 01:17:24 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2.CHECKSUM
2026-02-05 01:17:24.386307 | orchestrator | 2026-02-05 01:17:24 | INFO  | checksum: fa81774e60e440b52eb763bc24f9302dc0d7fa56080593c2ba4182f5e23fdc54
2026-02-05 01:17:24.456943 | orchestrator | 2026-02-05 01:17:24 | INFO  | It takes a moment until task 7c72e423-7674-4998-b43d-3306cebc2ab4 (image-manager) has been started and output is visible here.
2026-02-05 01:19:18.748280 | orchestrator | 2026-02-05 01:17:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-04'
2026-02-05 01:19:18.748386 | orchestrator | 2026-02-05 01:17:26 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2: 200
2026-02-05 01:19:18.748402 | orchestrator | 2026-02-05 01:17:26 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-04
2026-02-05 01:19:18.748410 | orchestrator | 2026-02-05 01:17:26 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-05 01:19:18.748418 | orchestrator | 2026-02-05 01:17:27 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:19:18.748426 | orchestrator | 2026-02-05 01:17:29 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748435 | orchestrator | 2026-02-05 01:17:39 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748440 | orchestrator | 2026-02-05 01:17:49 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748445 | orchestrator | 2026-02-05 01:17:59 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748451 | orchestrator | 2026-02-05 01:18:09 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748456 | orchestrator | 2026-02-05 01:18:19 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748461 | orchestrator | 2026-02-05 01:18:29 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748465 | orchestrator | 2026-02-05 01:18:39 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748469 | orchestrator | 2026-02-05 01:18:49 | INFO  | Waiting for import to complete...
2026-02-05 01:19:18.748473 | orchestrator | 2026-02-05 01:18:59 | INFO  | Waiting for import to complete... 2026-02-05 01:19:18.748496 | orchestrator | 2026-02-05 01:19:10 | INFO  | Waiting for image to leave queued state... 2026-02-05 01:19:18.748500 | orchestrator | 2026-02-05 01:19:12 | INFO  | Waiting for image to leave queued state... 2026-02-05 01:19:18.748505 | orchestrator | 2026-02-05 01:19:14 | INFO  | Waiting for image to leave queued state... 2026-02-05 01:19:18.748509 | orchestrator | 2026-02-05 01:19:16 | INFO  | Waiting for image to leave queued state... 2026-02-05 01:19:18.748513 | orchestrator | 2026-02-05 01:19:18 | ERROR  | Image OpenStack Octavia Amphora 2026-02-04 seems stuck in queued state 2026-02-05 01:19:18.748518 | orchestrator | 2026-02-05 01:19:18 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-05 01:19:18.748523 | orchestrator | 2026-02-05 01:19:18 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-05 01:19:18.748527 | orchestrator | 2026-02-05 01:19:18 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-05 01:19:18.748531 | orchestrator | 2026-02-05 01:19:18 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-05 01:19:18.748535 | orchestrator | 2026-02-05 01:19:18.748541 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output. 
2026-02-05 01:19:19.161710 | orchestrator | ERROR 2026-02-05 01:19:19.161976 | orchestrator | { 2026-02-05 01:19:19.162017 | orchestrator | "delta": "0:03:51.172543", 2026-02-05 01:19:19.162041 | orchestrator | "end": "2026-02-05 01:19:19.027974", 2026-02-05 01:19:19.162063 | orchestrator | "msg": "non-zero return code", 2026-02-05 01:19:19.162083 | orchestrator | "rc": 1, 2026-02-05 01:19:19.162103 | orchestrator | "start": "2026-02-05 01:15:27.855431" 2026-02-05 01:19:19.162122 | orchestrator | } failure 2026-02-05 01:19:19.180591 | 2026-02-05 01:19:19.180762 | PLAY RECAP 2026-02-05 01:19:19.180843 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2026-02-05 01:19:19.181310 | 2026-02-05 01:19:19.472093 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-02-05 01:19:19.473445 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-05 01:19:20.230253 | 2026-02-05 01:19:20.230450 | PLAY [Post output play] 2026-02-05 01:19:20.247169 | 2026-02-05 01:19:20.247317 | LOOP [stage-output : Register sources] 2026-02-05 01:19:20.317416 | 2026-02-05 01:19:20.317762 | TASK [stage-output : Check sudo] 2026-02-05 01:19:21.174849 | orchestrator | sudo: a password is required 2026-02-05 01:19:21.356473 | orchestrator | ok: Runtime: 0:00:00.017550 2026-02-05 01:19:21.371715 | 2026-02-05 01:19:21.371908 | LOOP [stage-output : Set source and destination for files and folders] 2026-02-05 01:19:21.420195 | 2026-02-05 01:19:21.420486 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-02-05 01:19:21.500126 | orchestrator | ok 2026-02-05 01:19:21.509404 | 2026-02-05 01:19:21.509594 | LOOP [stage-output : Ensure target folders exist] 2026-02-05 01:19:22.013186 | orchestrator | ok: "docs" 2026-02-05 01:19:22.013525 | 2026-02-05 01:19:22.337042 | orchestrator | ok: "artifacts" 2026-02-05 01:19:22.631653 | orchestrator | ok: "logs" 2026-02-05 
01:19:22.650912 | 2026-02-05 01:19:22.651102 | LOOP [stage-output : Copy files and folders to staging folder] 2026-02-05 01:19:22.688707 | 2026-02-05 01:19:22.688981 | TASK [stage-output : Make all log files readable] 2026-02-05 01:19:23.011159 | orchestrator | ok 2026-02-05 01:19:23.019082 | 2026-02-05 01:19:23.019207 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-02-05 01:19:23.055351 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:23.065239 | 2026-02-05 01:19:23.065365 | TASK [stage-output : Discover log files for compression] 2026-02-05 01:19:23.089233 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:23.102237 | 2026-02-05 01:19:23.102384 | LOOP [stage-output : Archive everything from logs] 2026-02-05 01:19:23.147696 | 2026-02-05 01:19:23.147861 | PLAY [Post cleanup play] 2026-02-05 01:19:23.157230 | 2026-02-05 01:19:23.157337 | TASK [Set cloud fact (Zuul deployment)] 2026-02-05 01:19:23.221986 | orchestrator | ok 2026-02-05 01:19:23.233241 | 2026-02-05 01:19:23.233361 | TASK [Set cloud fact (local deployment)] 2026-02-05 01:19:23.277995 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:23.292103 | 2026-02-05 01:19:23.292271 | TASK [Clean the cloud environment] 2026-02-05 01:19:25.008408 | orchestrator | 2026-02-05 01:19:25 - clean up servers 2026-02-05 01:19:25.770712 | orchestrator | 2026-02-05 01:19:25 - testbed-manager 2026-02-05 01:19:25.850509 | orchestrator | 2026-02-05 01:19:25 - testbed-node-1 2026-02-05 01:19:25.940624 | orchestrator | 2026-02-05 01:19:25 - testbed-node-5 2026-02-05 01:19:26.025000 | orchestrator | 2026-02-05 01:19:26 - testbed-node-3 2026-02-05 01:19:26.110761 | orchestrator | 2026-02-05 01:19:26 - testbed-node-0 2026-02-05 01:19:26.197460 | orchestrator | 2026-02-05 01:19:26 - testbed-node-4 2026-02-05 01:19:26.283961 | orchestrator | 2026-02-05 01:19:26 - testbed-node-2 2026-02-05 01:19:26.377013 | orchestrator | 2026-02-05 01:19:26 
- clean up keypairs 2026-02-05 01:19:26.392703 | orchestrator | 2026-02-05 01:19:26 - testbed 2026-02-05 01:19:26.415908 | orchestrator | 2026-02-05 01:19:26 - wait for servers to be gone 2026-02-05 01:19:35.153207 | orchestrator | 2026-02-05 01:19:35 - clean up ports 2026-02-05 01:19:35.339476 | orchestrator | 2026-02-05 01:19:35 - 00e69b7c-72cf-4579-a893-ab8e44f997de 2026-02-05 01:19:35.581658 | orchestrator | 2026-02-05 01:19:35 - 1415758e-b897-43c6-8ed2-f4ed2dcf3aaf 2026-02-05 01:19:35.898080 | orchestrator | 2026-02-05 01:19:35 - 314e84d2-8ffb-4501-ab1f-8c59782dfb75 2026-02-05 01:19:36.120344 | orchestrator | 2026-02-05 01:19:36 - 6d841e05-685b-43e9-aae2-4fb01299eb64 2026-02-05 01:19:36.331787 | orchestrator | 2026-02-05 01:19:36 - 7036252b-423d-4b35-99a9-8469c5be9c41 2026-02-05 01:19:36.539157 | orchestrator | 2026-02-05 01:19:36 - 71cb8696-26aa-4165-b421-fbfb6c641aa5 2026-02-05 01:19:36.924129 | orchestrator | 2026-02-05 01:19:36 - d4af560e-a880-402d-8330-8a7f2bfc24f6 2026-02-05 01:19:37.158294 | orchestrator | 2026-02-05 01:19:37 - clean up volumes 2026-02-05 01:19:37.317059 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-0-node-base 2026-02-05 01:19:37.355444 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-3-node-base 2026-02-05 01:19:37.395804 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-1-node-base 2026-02-05 01:19:37.440474 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-5-node-base 2026-02-05 01:19:37.480822 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-2-node-base 2026-02-05 01:19:37.522620 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-4-node-base 2026-02-05 01:19:37.565690 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-manager-base 2026-02-05 01:19:37.606370 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-0-node-3 2026-02-05 01:19:37.651136 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-4-node-4 2026-02-05 01:19:37.694782 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-3-node-3 
2026-02-05 01:19:37.735083 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-6-node-3 2026-02-05 01:19:37.778381 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-1-node-4 2026-02-05 01:19:37.821264 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-7-node-4 2026-02-05 01:19:37.861047 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-2-node-5 2026-02-05 01:19:37.903051 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-5-node-5 2026-02-05 01:19:37.943722 | orchestrator | 2026-02-05 01:19:37 - testbed-volume-8-node-5 2026-02-05 01:19:37.982393 | orchestrator | 2026-02-05 01:19:37 - disconnect routers 2026-02-05 01:19:38.100268 | orchestrator | 2026-02-05 01:19:38 - testbed 2026-02-05 01:19:39.500522 | orchestrator | 2026-02-05 01:19:39 - clean up subnets 2026-02-05 01:19:39.565858 | orchestrator | 2026-02-05 01:19:39 - subnet-testbed-management 2026-02-05 01:19:39.740144 | orchestrator | 2026-02-05 01:19:39 - clean up networks 2026-02-05 01:19:39.940827 | orchestrator | 2026-02-05 01:19:39 - net-testbed-management 2026-02-05 01:19:40.260983 | orchestrator | 2026-02-05 01:19:40 - clean up security groups 2026-02-05 01:19:40.302268 | orchestrator | 2026-02-05 01:19:40 - testbed-management 2026-02-05 01:19:40.409638 | orchestrator | 2026-02-05 01:19:40 - testbed-node 2026-02-05 01:19:40.520700 | orchestrator | 2026-02-05 01:19:40 - clean up floating ips 2026-02-05 01:19:40.554006 | orchestrator | 2026-02-05 01:19:40 - 81.163.192.243 2026-02-05 01:19:40.931546 | orchestrator | 2026-02-05 01:19:40 - clean up routers 2026-02-05 01:19:41.027612 | orchestrator | 2026-02-05 01:19:41 - testbed 2026-02-05 01:19:42.403740 | orchestrator | ok: Runtime: 0:00:18.712059 2026-02-05 01:19:42.406345 | 2026-02-05 01:19:42.406451 | PLAY RECAP 2026-02-05 01:19:42.406526 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-02-05 01:19:42.406587 | 2026-02-05 01:19:42.536881 | POST-RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/post.yml@main] 2026-02-05 01:19:42.539429 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-05 01:19:43.300863 | 2026-02-05 01:19:43.301022 | PLAY [Cleanup play] 2026-02-05 01:19:43.317042 | 2026-02-05 01:19:43.317166 | TASK [Set cloud fact (Zuul deployment)] 2026-02-05 01:19:43.370063 | orchestrator | ok 2026-02-05 01:19:43.377628 | 2026-02-05 01:19:43.377748 | TASK [Set cloud fact (local deployment)] 2026-02-05 01:19:43.401654 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:43.412958 | 2026-02-05 01:19:43.413082 | TASK [Clean the cloud environment] 2026-02-05 01:19:44.646468 | orchestrator | 2026-02-05 01:19:44 - clean up servers 2026-02-05 01:19:45.115637 | orchestrator | 2026-02-05 01:19:45 - clean up keypairs 2026-02-05 01:19:45.132208 | orchestrator | 2026-02-05 01:19:45 - wait for servers to be gone 2026-02-05 01:19:45.177381 | orchestrator | 2026-02-05 01:19:45 - clean up ports 2026-02-05 01:19:45.252494 | orchestrator | 2026-02-05 01:19:45 - clean up volumes 2026-02-05 01:19:45.333479 | orchestrator | 2026-02-05 01:19:45 - disconnect routers 2026-02-05 01:19:45.368048 | orchestrator | 2026-02-05 01:19:45 - clean up subnets 2026-02-05 01:19:45.389716 | orchestrator | 2026-02-05 01:19:45 - clean up networks 2026-02-05 01:19:45.552550 | orchestrator | 2026-02-05 01:19:45 - clean up security groups 2026-02-05 01:19:45.588334 | orchestrator | 2026-02-05 01:19:45 - clean up floating ips 2026-02-05 01:19:45.611298 | orchestrator | 2026-02-05 01:19:45 - clean up routers 2026-02-05 01:19:45.949632 | orchestrator | ok: Runtime: 0:00:01.439019 2026-02-05 01:19:45.953434 | 2026-02-05 01:19:45.953632 | PLAY RECAP 2026-02-05 01:19:45.953782 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-02-05 01:19:45.953853 | 2026-02-05 01:19:46.086811 | POST-RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-05 01:19:46.088613 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-05 01:19:46.841754 | 2026-02-05 01:19:46.841918 | PLAY [Base post-fetch] 2026-02-05 01:19:46.858309 | 2026-02-05 01:19:46.858448 | TASK [fetch-output : Set log path for multiple nodes] 2026-02-05 01:19:46.924671 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:46.939540 | 2026-02-05 01:19:46.939756 | TASK [fetch-output : Set log path for single node] 2026-02-05 01:19:46.986331 | orchestrator | ok 2026-02-05 01:19:46.994630 | 2026-02-05 01:19:46.994757 | LOOP [fetch-output : Ensure local output dirs] 2026-02-05 01:19:47.504590 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/work/logs" 2026-02-05 01:19:47.789009 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/work/artifacts" 2026-02-05 01:19:48.060959 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1eb1a1bfd15e4e1e93b557e18e3ea3fc/work/docs" 2026-02-05 01:19:48.086105 | 2026-02-05 01:19:48.086281 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-02-05 01:19:49.048032 | orchestrator | changed: .d..t...... ./ 2026-02-05 01:19:49.048295 | orchestrator | changed: All items complete 2026-02-05 01:19:49.048332 | 2026-02-05 01:19:49.753101 | orchestrator | changed: .d..t...... ./ 2026-02-05 01:19:50.543231 | orchestrator | changed: .d..t...... 
./ 2026-02-05 01:19:50.576228 | 2026-02-05 01:19:50.576374 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-05 01:19:50.623019 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:50.627823 | orchestrator | skipping: Conditional result was False 2026-02-05 01:19:50.650148 | 2026-02-05 01:19:50.650263 | PLAY RECAP 2026-02-05 01:19:50.650345 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-05 01:19:50.650389 | 2026-02-05 01:19:50.774045 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-05 01:19:50.776685 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-05 01:19:51.577351 | 2026-02-05 01:19:51.577602 | PLAY [Base post] 2026-02-05 01:19:51.594355 | 2026-02-05 01:19:51.594535 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-05 01:19:53.077003 | orchestrator | changed 2026-02-05 01:19:53.084402 | 2026-02-05 01:19:53.084515 | PLAY RECAP 2026-02-05 01:19:53.084595 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-05 01:19:53.084663 | 2026-02-05 01:19:53.220114 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-05 01:19:53.222474 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-05 01:19:54.019792 | 2026-02-05 01:19:54.019966 | PLAY [Base post-logs] 2026-02-05 01:19:54.030542 | 2026-02-05 01:19:54.030697 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-05 01:19:54.502597 | localhost | changed 2026-02-05 01:19:54.513804 | 2026-02-05 01:19:54.513964 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-05 01:19:54.550141 | localhost | ok 2026-02-05 01:19:54.553524 | 2026-02-05 01:19:54.553648 | TASK [Set zuul-log-path fact] 2026-02-05 
01:19:54.579968 | localhost | ok 2026-02-05 01:19:54.588893 | 2026-02-05 01:19:54.589009 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-05 01:19:54.627302 | localhost | ok 2026-02-05 01:19:54.635535 | 2026-02-05 01:19:54.635745 | TASK [upload-logs : Create log directories] 2026-02-05 01:19:55.140677 | localhost | changed 2026-02-05 01:19:55.143578 | 2026-02-05 01:19:55.143688 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-05 01:19:55.674975 | localhost -> localhost | ok: Runtime: 0:00:00.007228 2026-02-05 01:19:55.679888 | 2026-02-05 01:19:55.680022 | TASK [upload-logs : Upload logs to log server] 2026-02-05 01:19:56.250253 | localhost | Output suppressed because no_log was given 2026-02-05 01:19:56.254656 | 2026-02-05 01:19:56.255036 | LOOP [upload-logs : Compress console log and json output] 2026-02-05 01:19:56.320547 | localhost | skipping: Conditional result was False 2026-02-05 01:19:56.325535 | localhost | skipping: Conditional result was False 2026-02-05 01:19:56.338345 | 2026-02-05 01:19:56.338668 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-05 01:19:56.389657 | localhost | skipping: Conditional result was False 2026-02-05 01:19:56.390410 | 2026-02-05 01:19:56.393193 | localhost | skipping: Conditional result was False 2026-02-05 01:19:56.400808 | 2026-02-05 01:19:56.401071 | LOOP [upload-logs : Upload console log and json output]